00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 424 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3086 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.159 Using shallow fetch with depth 1 00:00:00.159 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.159 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.589 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.601 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.612 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:05.613 > git config core.sparsecheckout # timeout=10 00:00:05.624 > git read-tree -mu HEAD # timeout=10 00:00:05.641 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:05.658 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:05.658 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.750 [Pipeline] Start of Pipeline 00:00:05.765 [Pipeline] library 00:00:05.767 Loading library shm_lib@master 00:00:05.767 Library shm_lib@master is cached. Copying from home. 00:00:05.789 [Pipeline] node 00:00:05.808 Running on WFP31 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.810 [Pipeline] { 00:00:05.820 [Pipeline] catchError 00:00:05.822 [Pipeline] { 00:00:05.835 [Pipeline] wrap 00:00:05.845 [Pipeline] { 00:00:05.852 [Pipeline] stage 00:00:05.854 [Pipeline] { (Prologue) 00:00:06.036 [Pipeline] sh 00:00:06.327 + logger -p user.info -t JENKINS-CI 00:00:06.347 [Pipeline] echo 00:00:06.348 Node: WFP31 00:00:06.355 [Pipeline] sh 00:00:06.657 [Pipeline] setCustomBuildProperty 00:00:06.669 [Pipeline] echo 00:00:06.670 Cleanup processes 00:00:06.675 [Pipeline] sh 00:00:06.961 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.961 540525 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.974 [Pipeline] sh 00:00:07.260 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.260 ++ grep -v 'sudo pgrep' 00:00:07.260 ++ awk '{print $1}' 00:00:07.260 + sudo kill -9 00:00:07.260 + true 00:00:07.275 [Pipeline] cleanWs 00:00:07.284 [WS-CLEANUP] Deleting project workspace... 00:00:07.284 [WS-CLEANUP] Deferred wipeout is used... 
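The cleanup step traced above kills any stray SPDK processes left over from a previous run before the workspace wipe. A minimal sketch of that pattern, assuming the same workspace path as this job (adjust for other builders):

  # List processes whose command line references the workspace, drop the pgrep
  # invocation itself, take the PIDs, and kill them; "|| true" keeps set -e from
  # failing the stage when nothing is running (as happens in the trace above).
  WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true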
00:00:07.291 [WS-CLEANUP] done 00:00:07.297 [Pipeline] setCustomBuildProperty 00:00:07.311 [Pipeline] sh 00:00:07.597 + sudo git config --global --replace-all safe.directory '*' 00:00:07.670 [Pipeline] nodesByLabel 00:00:07.672 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.683 [Pipeline] httpRequest 00:00:07.689 HttpMethod: GET 00:00:07.689 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.698 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.716 Response Code: HTTP/1.1 200 OK 00:00:07.716 Success: Status code 200 is in the accepted range: 200,404 00:00:07.717 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.663 [Pipeline] sh 00:00:10.949 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.967 [Pipeline] httpRequest 00:00:10.972 HttpMethod: GET 00:00:10.973 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:10.973 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:10.991 Response Code: HTTP/1.1 200 OK 00:00:10.991 Success: Status code 200 is in the accepted range: 200,404 00:00:10.992 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:37.656 [Pipeline] sh 00:00:37.941 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:41.249 [Pipeline] sh 00:00:41.532 + git -C spdk log --oneline -n5 00:00:41.532 4506c0c36 test/common: Enable inherit_errexit 00:00:41.532 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:41.532 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:41.532 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:00:41.532 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc() 00:00:41.552 [Pipeline] withCredentials 00:00:41.563 > git --version # timeout=10 00:00:41.577 > git --version # 'git version 2.39.2' 00:00:41.602 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:41.604 [Pipeline] { 00:00:41.614 [Pipeline] retry 00:00:41.616 [Pipeline] { 00:00:41.635 [Pipeline] sh 00:00:42.150 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:42.163 [Pipeline] } 00:00:42.185 [Pipeline] // retry 00:00:42.191 [Pipeline] } 00:00:42.211 [Pipeline] // withCredentials 00:00:42.223 [Pipeline] httpRequest 00:00:42.228 HttpMethod: GET 00:00:42.229 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:42.233 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:42.241 Response Code: HTTP/1.1 200 OK 00:00:42.242 Success: Status code 200 is in the accepted range: 200,404 00:00:42.242 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:45.138 [Pipeline] sh 00:00:45.424 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:47.345 [Pipeline] sh 00:00:47.627 + git -C dpdk log --oneline -n5 00:00:47.627 eeb0605f11 version: 23.11.0 00:00:47.627 238778122a doc: update release notes for 23.11 00:00:47.627 46aa6b3cfc doc: fix description of RSS features 00:00:47.627 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:47.627 7e421ae345 devtools: 
support skipping forbid rule check 00:00:47.638 [Pipeline] } 00:00:47.652 [Pipeline] // stage 00:00:47.659 [Pipeline] stage 00:00:47.661 [Pipeline] { (Prepare) 00:00:47.682 [Pipeline] writeFile 00:00:47.703 [Pipeline] sh 00:00:47.989 + logger -p user.info -t JENKINS-CI 00:00:48.002 [Pipeline] sh 00:00:48.284 + logger -p user.info -t JENKINS-CI 00:00:48.298 [Pipeline] sh 00:00:48.580 + cat autorun-spdk.conf 00:00:48.580 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.580 SPDK_TEST_NVMF=1 00:00:48.580 SPDK_TEST_NVME_CLI=1 00:00:48.580 SPDK_TEST_NVMF_NICS=mlx5 00:00:48.580 SPDK_RUN_UBSAN=1 00:00:48.580 NET_TYPE=phy 00:00:48.580 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:48.580 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:48.587 RUN_NIGHTLY=1 00:00:48.594 [Pipeline] readFile 00:00:48.621 [Pipeline] withEnv 00:00:48.623 [Pipeline] { 00:00:48.638 [Pipeline] sh 00:00:48.929 + set -ex 00:00:48.929 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:48.929 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:48.929 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.929 ++ SPDK_TEST_NVMF=1 00:00:48.929 ++ SPDK_TEST_NVME_CLI=1 00:00:48.929 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:48.929 ++ SPDK_RUN_UBSAN=1 00:00:48.929 ++ NET_TYPE=phy 00:00:48.929 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:48.929 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:48.929 ++ RUN_NIGHTLY=1 00:00:48.929 + case $SPDK_TEST_NVMF_NICS in 00:00:48.929 + DRIVERS=mlx5_ib 00:00:48.929 + [[ -n mlx5_ib ]] 00:00:48.929 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:48.929 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:55.501 rmmod: ERROR: Module irdma is not currently loaded 00:00:55.501 rmmod: ERROR: Module i40iw is not currently loaded 00:00:55.501 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:55.501 + true 00:00:55.501 + for D in $DRIVERS 00:00:55.501 + sudo modprobe mlx5_ib 00:00:55.501 + exit 0 00:00:55.512 [Pipeline] } 00:00:55.530 [Pipeline] // withEnv 00:00:55.536 [Pipeline] } 00:00:55.553 [Pipeline] // stage 00:00:55.565 [Pipeline] catchError 00:00:55.567 [Pipeline] { 00:00:55.582 [Pipeline] timeout 00:00:55.582 Timeout set to expire in 40 min 00:00:55.584 [Pipeline] { 00:00:55.601 [Pipeline] stage 00:00:55.603 [Pipeline] { (Tests) 00:00:55.620 [Pipeline] sh 00:00:55.912 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:55.912 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:55.912 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:55.912 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:55.912 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:55.912 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:55.912 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:55.912 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:55.912 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:00:55.912 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:55.912 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:00:55.912 + source /etc/os-release 00:00:55.912 ++ NAME='Fedora Linux' 00:00:55.912 ++ VERSION='38 (Cloud Edition)' 00:00:55.912 ++ ID=fedora 00:00:55.912 ++ VERSION_ID=38 00:00:55.912 ++ VERSION_CODENAME= 00:00:55.912 ++ PLATFORM_ID=platform:f38 00:00:55.912 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:55.912 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:55.912 ++ LOGO=fedora-logo-icon 00:00:55.912 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:55.912 ++ HOME_URL=https://fedoraproject.org/ 00:00:55.912 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:55.912 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:55.912 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:55.912 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:55.912 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:55.912 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:55.912 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:55.912 ++ SUPPORT_END=2024-05-14 00:00:55.912 ++ VARIANT='Cloud Edition' 00:00:55.912 ++ VARIANT_ID=cloud 00:00:55.912 + uname -a 00:00:55.912 Linux spdk-wfp-31 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:55.912 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:00:59.233 Hugepages 00:00:59.233 node hugesize free / total 00:00:59.233 node0 1048576kB 0 / 0 00:00:59.233 node0 2048kB 0 / 0 00:00:59.233 node1 1048576kB 0 / 0 00:00:59.233 node1 2048kB 0 / 0 00:00:59.233 00:00:59.233 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:59.233 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:59.233 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:59.233 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:59.233 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:59.233 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:59.233 + rm -f /tmp/spdk-ld-path 00:00:59.233 + source autorun-spdk.conf 00:00:59.233 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.233 ++ SPDK_TEST_NVMF=1 00:00:59.233 ++ SPDK_TEST_NVME_CLI=1 00:00:59.233 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:59.233 ++ SPDK_RUN_UBSAN=1 00:00:59.233 ++ NET_TYPE=phy 00:00:59.233 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:59.233 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:59.233 ++ RUN_NIGHTLY=1 00:00:59.233 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:59.233 + [[ -n '' ]] 00:00:59.233 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:59.233 + for M in /var/spdk/build-*-manifest.txt 00:00:59.233 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 
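The autoruner.sh trace above resolves the workspace layout, prepares the output directory, and sources /etc/os-release so later steps can branch on the distro. A rough reconstruction of that bootstrap, with variable names taken from the trace (the actual jbp script may differ):

  # Workspace/output bootstrap as seen in the xtrace above (reconstruction,
  # not the verbatim script).
  DIR_ROOT=$(readlink -f /var/jenkins/workspace/nvmf-phy-autotest)
  DIR_SPDK=$DIR_ROOT/spdk
  DIR_OUTPUT=$DIR_ROOT/output
  [[ -d $DIR_SPDK ]]                        # the SPDK checkout must already be in place
  [[ -d $DIR_OUTPUT ]] || mkdir -p "$DIR_OUTPUT"
  cd "$DIR_ROOT"
  source /etc/os-release                    # exposes NAME, VERSION_ID, ... for distro checks
  uname -a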
00:00:59.233 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:59.233 + for M in /var/spdk/build-*-manifest.txt 00:00:59.233 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:59.233 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:59.233 ++ uname 00:00:59.233 + [[ Linux == \L\i\n\u\x ]] 00:00:59.233 + sudo dmesg -T 00:00:59.233 + sudo dmesg --clear 00:00:59.233 + dmesg_pid=541552 00:00:59.233 + [[ Fedora Linux == FreeBSD ]] 00:00:59.233 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.233 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:59.233 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:59.233 + [[ -x /usr/src/fio-static/fio ]] 00:00:59.233 + export FIO_BIN=/usr/src/fio-static/fio 00:00:59.233 + FIO_BIN=/usr/src/fio-static/fio 00:00:59.233 + sudo dmesg -Tw 00:00:59.233 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:59.233 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:59.233 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:59.233 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.233 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:59.233 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:59.233 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.233 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:59.233 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:59.233 Test configuration: 00:00:59.233 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.233 SPDK_TEST_NVMF=1 00:00:59.233 SPDK_TEST_NVME_CLI=1 00:00:59.233 SPDK_TEST_NVMF_NICS=mlx5 00:00:59.233 SPDK_RUN_UBSAN=1 00:00:59.233 NET_TYPE=phy 00:00:59.233 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:59.233 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:59.233 RUN_NIGHTLY=1 02:27:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:00:59.233 02:27:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:59.233 02:27:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:59.233 02:27:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:59.233 02:27:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.233 02:27:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.233 02:27:02 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.233 02:27:02 -- paths/export.sh@5 -- $ export PATH 00:00:59.234 02:27:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:59.234 02:27:02 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:00:59.234 02:27:02 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:59.234 02:27:02 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715732822.XXXXXX 00:00:59.234 02:27:02 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715732822.RC2CJ6 00:00:59.234 02:27:02 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:59.234 02:27:02 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:00:59.234 02:27:02 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:59.234 02:27:02 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:00:59.234 02:27:02 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:59.234 02:27:02 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:59.234 02:27:02 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:59.234 02:27:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:59.234 02:27:02 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.234 02:27:02 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:00:59.234 02:27:02 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:59.234 02:27:02 -- pm/common@17 -- $ local monitor 00:00:59.234 02:27:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.234 02:27:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.234 02:27:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.234 02:27:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:59.234 02:27:02 -- pm/common@21 -- $ date +%s 00:00:59.234 02:27:02 -- pm/common@25 -- $ sleep 1 00:00:59.234 02:27:02 -- pm/common@21 -- $ date +%s 00:00:59.234 02:27:02 -- pm/common@21 -- $ date +%s 00:00:59.234 02:27:02 -- pm/common@21 -- $ date +%s 00:00:59.234 02:27:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732822 00:00:59.234 02:27:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732822 00:00:59.234 02:27:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732822 00:00:59.234 02:27:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715732822 00:00:59.234 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732822_collect-vmstat.pm.log 00:00:59.234 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732822_collect-cpu-load.pm.log 00:00:59.234 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732822_collect-cpu-temp.pm.log 00:00:59.234 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715732822_collect-bmc-pm.bmc.pm.log 00:01:00.173 02:27:03 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:00.173 02:27:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:00.173 02:27:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:00.173 02:27:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:00.173 02:27:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:00.173 Wed May 15 12:27:03 AM UTC 2024 00:01:00.173 02:27:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:00.173 v24.05-pre-658-g4506c0c36 00:01:00.173 02:27:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:00.173 02:27:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:00.173 02:27:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:00.173 02:27:03 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:00.173 02:27:03 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:00.173 02:27:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.173 ************************************ 00:01:00.173 START TEST ubsan 00:01:00.173 ************************************ 00:01:00.173 02:27:03 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:01:00.173 using ubsan 00:01:00.173 00:01:00.173 real 0m0.000s 00:01:00.173 user 0m0.000s 00:01:00.173 sys 0m0.000s 00:01:00.173 02:27:03 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:00.173 02:27:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:00.173 ************************************ 00:01:00.173 END TEST ubsan 00:01:00.173 ************************************ 00:01:00.173 02:27:03 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:00.173 02:27:03 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:00.173 02:27:03 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:00.173 02:27:03 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:01:00.173 02:27:03 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:00.173 02:27:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.173 ************************************ 00:01:00.173 START TEST build_native_dpdk 00:01:00.173 
************************************ 00:01:00.173 02:27:03 build_native_dpdk -- common/autotest_common.sh@1122 -- $ _build_native_dpdk 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:00.173 02:27:03 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:00.432 02:27:03 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:00.433 eeb0605f11 version: 23.11.0 00:01:00.433 238778122a doc: update release notes for 23.11 00:01:00.433 46aa6b3cfc doc: fix description of RSS features 00:01:00.433 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:00.433 7e421ae345 devtools: support skipping forbid rule check 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:00.433 02:27:03 build_native_dpdk -- 
scripts/common.sh@342 -- $ : 1 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:00.433 02:27:03 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:00.433 patching file config/rte_config.h 00:01:00.433 Hunk #1 succeeded at 60 (offset 1 line). 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:00.433 02:27:03 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:05.711 The Meson build system 00:01:05.711 Version: 1.3.1 00:01:05.711 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:05.711 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:05.711 Build type: native build 00:01:05.711 Program cat found: YES (/usr/bin/cat) 00:01:05.711 Project name: DPDK 00:01:05.711 Project version: 23.11.0 00:01:05.711 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:05.711 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:05.711 Host machine cpu family: x86_64 00:01:05.711 Host machine cpu: x86_64 00:01:05.711 Message: ## Building in Developer Mode ## 00:01:05.711 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:05.711 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:05.711 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:05.711 Program python3 found: YES (/usr/bin/python3) 00:01:05.711 Program cat found: YES (/usr/bin/cat) 00:01:05.711 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:05.711 Compiler for C supports arguments -march=native: YES 00:01:05.711 Checking for size of "void *" : 8 00:01:05.711 Checking for size of "void *" : 8 (cached) 00:01:05.711 Library m found: YES 00:01:05.711 Library numa found: YES 00:01:05.711 Has header "numaif.h" : YES 00:01:05.711 Library fdt found: NO 00:01:05.711 Library execinfo found: NO 00:01:05.711 Has header "execinfo.h" : YES 00:01:05.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:05.711 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:05.711 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:05.711 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:05.711 Run-time dependency openssl found: YES 3.0.9 00:01:05.711 Run-time dependency libpcap found: YES 1.10.4 00:01:05.711 Has header "pcap.h" with dependency libpcap: YES 00:01:05.711 Compiler for C supports arguments -Wcast-qual: YES 00:01:05.711 Compiler for C supports arguments -Wdeprecated: YES 00:01:05.711 Compiler for C supports arguments -Wformat: YES 00:01:05.711 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:05.711 Compiler for C supports arguments -Wformat-security: NO 00:01:05.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:05.711 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:05.711 Compiler for C supports arguments -Wnested-externs: YES 00:01:05.711 Compiler for C supports arguments -Wold-style-definition: YES 00:01:05.711 Compiler for C supports arguments -Wpointer-arith: YES 00:01:05.711 Compiler for C supports arguments -Wsign-compare: YES 00:01:05.711 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:05.711 Compiler for C supports arguments -Wundef: YES 00:01:05.711 Compiler for C supports arguments -Wwrite-strings: YES 00:01:05.711 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:05.711 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:05.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:05.711 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:05.712 Program objdump found: YES (/usr/bin/objdump) 00:01:05.712 Compiler for C supports arguments -mavx512f: YES 00:01:05.712 Checking if "AVX512 checking" compiles: YES 00:01:05.712 Fetching value of define "__SSE4_2__" : 1 00:01:05.712 Fetching value of define "__AES__" : 1 00:01:05.712 Fetching value of define "__AVX__" : 1 00:01:05.712 Fetching value of define "__AVX2__" : 1 00:01:05.712 Fetching value of define "__AVX512BW__" : 1 00:01:05.712 Fetching value of define "__AVX512CD__" : 1 00:01:05.712 Fetching value of define "__AVX512DQ__" : 1 00:01:05.712 Fetching value of define "__AVX512F__" : 1 00:01:05.712 Fetching value of define "__AVX512VL__" : 1 00:01:05.712 Fetching value of define "__PCLMUL__" : 1 00:01:05.712 Fetching value of define "__RDRND__" : 1 00:01:05.712 Fetching value of define "__RDSEED__" : 1 00:01:05.712 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:05.712 Fetching value of define "__znver1__" : (undefined) 00:01:05.712 Fetching value of define "__znver2__" : (undefined) 00:01:05.712 Fetching value of define "__znver3__" : (undefined) 00:01:05.712 Fetching value of define "__znver4__" : (undefined) 00:01:05.712 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:05.712 Message: lib/log: Defining dependency "log" 00:01:05.712 Message: lib/kvargs: Defining dependency "kvargs" 00:01:05.712 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:05.712 Checking for function "getentropy" : NO 00:01:05.712 Message: lib/eal: Defining dependency "eal" 00:01:05.712 Message: lib/ring: Defining dependency "ring" 00:01:05.712 Message: lib/rcu: Defining dependency "rcu" 00:01:05.712 Message: lib/mempool: Defining dependency "mempool" 00:01:05.712 Message: lib/mbuf: Defining dependency "mbuf" 00:01:05.712 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:05.712 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:05.712 Compiler for C supports arguments -mpclmul: YES 00:01:05.712 Compiler for C supports arguments -maes: YES 00:01:05.712 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:05.712 Compiler for C supports arguments -mavx512bw: YES 00:01:05.712 Compiler for C supports arguments -mavx512dq: YES 00:01:05.712 Compiler for C supports arguments -mavx512vl: YES 00:01:05.712 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:05.712 Compiler for C supports arguments -mavx2: YES 00:01:05.712 Compiler for C supports arguments -mavx: YES 00:01:05.712 Message: lib/net: Defining dependency "net" 00:01:05.712 Message: lib/meter: Defining dependency "meter" 00:01:05.712 Message: lib/ethdev: Defining dependency "ethdev" 00:01:05.712 Message: lib/pci: Defining dependency "pci" 00:01:05.712 Message: lib/cmdline: Defining dependency "cmdline" 00:01:05.712 Message: lib/metrics: Defining dependency "metrics" 00:01:05.712 Message: lib/hash: Defining dependency "hash" 00:01:05.712 Message: lib/timer: Defining dependency "timer" 00:01:05.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:05.712 Message: lib/acl: Defining dependency "acl" 00:01:05.712 Message: lib/bbdev: Defining dependency "bbdev" 00:01:05.712 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:05.712 Run-time dependency libelf found: YES 0.190 00:01:05.712 Message: lib/bpf: Defining dependency "bpf" 00:01:05.712 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:05.712 Message: lib/compressdev: Defining dependency "compressdev" 00:01:05.712 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:05.712 Message: lib/distributor: Defining dependency "distributor" 00:01:05.712 Message: lib/dmadev: Defining dependency "dmadev" 00:01:05.712 Message: lib/efd: Defining dependency "efd" 00:01:05.712 Message: lib/eventdev: Defining dependency "eventdev" 00:01:05.712 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:05.712 Message: lib/gpudev: Defining dependency "gpudev" 00:01:05.712 Message: lib/gro: Defining dependency "gro" 00:01:05.712 Message: lib/gso: Defining dependency "gso" 00:01:05.712 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:05.712 Message: lib/jobstats: Defining dependency "jobstats" 00:01:05.712 Message: lib/latencystats: Defining dependency "latencystats" 00:01:05.712 Message: lib/lpm: Defining dependency "lpm" 00:01:05.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:05.712 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:05.712 Message: lib/member: Defining dependency "member" 00:01:05.712 Message: lib/pcapng: Defining dependency "pcapng" 00:01:05.712 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:05.712 Message: lib/power: Defining dependency "power" 00:01:05.712 Message: lib/rawdev: Defining dependency "rawdev" 00:01:05.712 Message: lib/regexdev: Defining dependency "regexdev" 00:01:05.712 Message: lib/mldev: Defining dependency "mldev" 00:01:05.712 Message: lib/rib: Defining dependency "rib" 00:01:05.712 Message: lib/reorder: Defining dependency "reorder" 00:01:05.712 Message: lib/sched: Defining dependency "sched" 00:01:05.712 Message: lib/security: Defining dependency "security" 00:01:05.712 Message: lib/stack: Defining dependency "stack" 00:01:05.712 Has header "linux/userfaultfd.h" : YES 00:01:05.712 Has header "linux/vduse.h" : YES 00:01:05.712 Message: lib/vhost: Defining dependency "vhost" 00:01:05.712 Message: lib/ipsec: Defining dependency "ipsec" 00:01:05.712 Message: lib/pdcp: Defining dependency "pdcp" 00:01:05.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:05.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:05.712 Message: lib/fib: Defining dependency "fib" 00:01:05.712 Message: lib/port: Defining dependency "port" 00:01:05.712 Message: lib/pdump: Defining dependency "pdump" 00:01:05.712 Message: lib/table: Defining dependency "table" 00:01:05.712 Message: lib/pipeline: Defining dependency "pipeline" 00:01:05.712 Message: lib/graph: Defining dependency "graph" 00:01:05.712 Message: lib/node: Defining dependency "node" 00:01:05.712 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:07.092 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:07.092 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:07.092 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:07.092 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:07.092 Compiler for C supports arguments -Wno-unused-value: YES 00:01:07.092 Compiler for C supports arguments -Wno-format: YES 00:01:07.092 Compiler for C supports arguments -Wno-format-security: YES 00:01:07.092 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:07.092 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:07.092 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:07.092 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:07.092 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:07.092 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:07.092 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:07.092 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:07.092 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:07.092 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:07.092 Has header "sys/epoll.h" : YES 00:01:07.092 Program doxygen found: YES (/usr/bin/doxygen) 00:01:07.092 Configuring doxy-api-html.conf using configuration 00:01:07.092 Configuring doxy-api-man.conf using configuration 00:01:07.092 Program mandb found: YES (/usr/bin/mandb) 00:01:07.092 Program sphinx-build found: NO 00:01:07.092 Configuring rte_build_config.h using configuration 00:01:07.092 Message: 00:01:07.092 ================= 00:01:07.092 Applications Enabled 00:01:07.092 
================= 00:01:07.092 00:01:07.092 apps: 00:01:07.092 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:07.092 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:07.092 test-pmd, test-regex, test-sad, test-security-perf, 00:01:07.092 00:01:07.092 Message: 00:01:07.092 ================= 00:01:07.092 Libraries Enabled 00:01:07.092 ================= 00:01:07.092 00:01:07.092 libs: 00:01:07.092 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:07.092 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:07.092 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:07.092 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:07.092 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:07.092 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:07.092 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:07.092 00:01:07.092 00:01:07.092 Message: 00:01:07.092 =============== 00:01:07.092 Drivers Enabled 00:01:07.092 =============== 00:01:07.092 00:01:07.092 common: 00:01:07.092 00:01:07.092 bus: 00:01:07.092 pci, vdev, 00:01:07.092 mempool: 00:01:07.092 ring, 00:01:07.092 dma: 00:01:07.092 00:01:07.092 net: 00:01:07.092 i40e, 00:01:07.092 raw: 00:01:07.092 00:01:07.092 crypto: 00:01:07.092 00:01:07.092 compress: 00:01:07.092 00:01:07.092 regex: 00:01:07.092 00:01:07.092 ml: 00:01:07.092 00:01:07.092 vdpa: 00:01:07.092 00:01:07.092 event: 00:01:07.092 00:01:07.092 baseband: 00:01:07.092 00:01:07.092 gpu: 00:01:07.092 00:01:07.092 00:01:07.092 Message: 00:01:07.092 ================= 00:01:07.092 Content Skipped 00:01:07.092 ================= 00:01:07.092 00:01:07.092 apps: 00:01:07.092 00:01:07.092 libs: 00:01:07.092 00:01:07.092 drivers: 00:01:07.092 common/cpt: not in enabled drivers build config 00:01:07.092 common/dpaax: not in enabled drivers build config 00:01:07.092 common/iavf: not in enabled drivers build config 00:01:07.092 common/idpf: not in enabled drivers build config 00:01:07.092 common/mvep: not in enabled drivers build config 00:01:07.092 common/octeontx: not in enabled drivers build config 00:01:07.092 bus/auxiliary: not in enabled drivers build config 00:01:07.092 bus/cdx: not in enabled drivers build config 00:01:07.092 bus/dpaa: not in enabled drivers build config 00:01:07.092 bus/fslmc: not in enabled drivers build config 00:01:07.092 bus/ifpga: not in enabled drivers build config 00:01:07.092 bus/platform: not in enabled drivers build config 00:01:07.092 bus/vmbus: not in enabled drivers build config 00:01:07.092 common/cnxk: not in enabled drivers build config 00:01:07.092 common/mlx5: not in enabled drivers build config 00:01:07.092 common/nfp: not in enabled drivers build config 00:01:07.092 common/qat: not in enabled drivers build config 00:01:07.092 common/sfc_efx: not in enabled drivers build config 00:01:07.092 mempool/bucket: not in enabled drivers build config 00:01:07.092 mempool/cnxk: not in enabled drivers build config 00:01:07.092 mempool/dpaa: not in enabled drivers build config 00:01:07.092 mempool/dpaa2: not in enabled drivers build config 00:01:07.092 mempool/octeontx: not in enabled drivers build config 00:01:07.092 mempool/stack: not in enabled drivers build config 00:01:07.092 dma/cnxk: not in enabled drivers build config 00:01:07.092 dma/dpaa: not in enabled drivers build config 00:01:07.092 dma/dpaa2: not in enabled drivers build 
config 00:01:07.092 dma/hisilicon: not in enabled drivers build config 00:01:07.092 dma/idxd: not in enabled drivers build config 00:01:07.092 dma/ioat: not in enabled drivers build config 00:01:07.092 dma/skeleton: not in enabled drivers build config 00:01:07.092 net/af_packet: not in enabled drivers build config 00:01:07.092 net/af_xdp: not in enabled drivers build config 00:01:07.092 net/ark: not in enabled drivers build config 00:01:07.092 net/atlantic: not in enabled drivers build config 00:01:07.092 net/avp: not in enabled drivers build config 00:01:07.092 net/axgbe: not in enabled drivers build config 00:01:07.092 net/bnx2x: not in enabled drivers build config 00:01:07.092 net/bnxt: not in enabled drivers build config 00:01:07.092 net/bonding: not in enabled drivers build config 00:01:07.092 net/cnxk: not in enabled drivers build config 00:01:07.092 net/cpfl: not in enabled drivers build config 00:01:07.092 net/cxgbe: not in enabled drivers build config 00:01:07.092 net/dpaa: not in enabled drivers build config 00:01:07.092 net/dpaa2: not in enabled drivers build config 00:01:07.092 net/e1000: not in enabled drivers build config 00:01:07.092 net/ena: not in enabled drivers build config 00:01:07.092 net/enetc: not in enabled drivers build config 00:01:07.092 net/enetfec: not in enabled drivers build config 00:01:07.092 net/enic: not in enabled drivers build config 00:01:07.092 net/failsafe: not in enabled drivers build config 00:01:07.092 net/fm10k: not in enabled drivers build config 00:01:07.092 net/gve: not in enabled drivers build config 00:01:07.092 net/hinic: not in enabled drivers build config 00:01:07.092 net/hns3: not in enabled drivers build config 00:01:07.092 net/iavf: not in enabled drivers build config 00:01:07.092 net/ice: not in enabled drivers build config 00:01:07.092 net/idpf: not in enabled drivers build config 00:01:07.092 net/igc: not in enabled drivers build config 00:01:07.092 net/ionic: not in enabled drivers build config 00:01:07.092 net/ipn3ke: not in enabled drivers build config 00:01:07.092 net/ixgbe: not in enabled drivers build config 00:01:07.092 net/mana: not in enabled drivers build config 00:01:07.092 net/memif: not in enabled drivers build config 00:01:07.092 net/mlx4: not in enabled drivers build config 00:01:07.092 net/mlx5: not in enabled drivers build config 00:01:07.092 net/mvneta: not in enabled drivers build config 00:01:07.093 net/mvpp2: not in enabled drivers build config 00:01:07.093 net/netvsc: not in enabled drivers build config 00:01:07.093 net/nfb: not in enabled drivers build config 00:01:07.093 net/nfp: not in enabled drivers build config 00:01:07.093 net/ngbe: not in enabled drivers build config 00:01:07.093 net/null: not in enabled drivers build config 00:01:07.093 net/octeontx: not in enabled drivers build config 00:01:07.093 net/octeon_ep: not in enabled drivers build config 00:01:07.093 net/pcap: not in enabled drivers build config 00:01:07.093 net/pfe: not in enabled drivers build config 00:01:07.093 net/qede: not in enabled drivers build config 00:01:07.093 net/ring: not in enabled drivers build config 00:01:07.093 net/sfc: not in enabled drivers build config 00:01:07.093 net/softnic: not in enabled drivers build config 00:01:07.093 net/tap: not in enabled drivers build config 00:01:07.093 net/thunderx: not in enabled drivers build config 00:01:07.093 net/txgbe: not in enabled drivers build config 00:01:07.093 net/vdev_netvsc: not in enabled drivers build config 00:01:07.093 net/vhost: not in enabled drivers build config 
00:01:07.093 net/virtio: not in enabled drivers build config 00:01:07.093 net/vmxnet3: not in enabled drivers build config 00:01:07.093 raw/cnxk_bphy: not in enabled drivers build config 00:01:07.093 raw/cnxk_gpio: not in enabled drivers build config 00:01:07.093 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:07.093 raw/ifpga: not in enabled drivers build config 00:01:07.093 raw/ntb: not in enabled drivers build config 00:01:07.093 raw/skeleton: not in enabled drivers build config 00:01:07.093 crypto/armv8: not in enabled drivers build config 00:01:07.093 crypto/bcmfs: not in enabled drivers build config 00:01:07.093 crypto/caam_jr: not in enabled drivers build config 00:01:07.093 crypto/ccp: not in enabled drivers build config 00:01:07.093 crypto/cnxk: not in enabled drivers build config 00:01:07.093 crypto/dpaa_sec: not in enabled drivers build config 00:01:07.093 crypto/dpaa2_sec: not in enabled drivers build config 00:01:07.093 crypto/ipsec_mb: not in enabled drivers build config 00:01:07.093 crypto/mlx5: not in enabled drivers build config 00:01:07.093 crypto/mvsam: not in enabled drivers build config 00:01:07.093 crypto/nitrox: not in enabled drivers build config 00:01:07.093 crypto/null: not in enabled drivers build config 00:01:07.093 crypto/octeontx: not in enabled drivers build config 00:01:07.093 crypto/openssl: not in enabled drivers build config 00:01:07.093 crypto/scheduler: not in enabled drivers build config 00:01:07.093 crypto/uadk: not in enabled drivers build config 00:01:07.093 crypto/virtio: not in enabled drivers build config 00:01:07.093 compress/isal: not in enabled drivers build config 00:01:07.093 compress/mlx5: not in enabled drivers build config 00:01:07.093 compress/octeontx: not in enabled drivers build config 00:01:07.093 compress/zlib: not in enabled drivers build config 00:01:07.093 regex/mlx5: not in enabled drivers build config 00:01:07.093 regex/cn9k: not in enabled drivers build config 00:01:07.093 ml/cnxk: not in enabled drivers build config 00:01:07.093 vdpa/ifc: not in enabled drivers build config 00:01:07.093 vdpa/mlx5: not in enabled drivers build config 00:01:07.093 vdpa/nfp: not in enabled drivers build config 00:01:07.093 vdpa/sfc: not in enabled drivers build config 00:01:07.093 event/cnxk: not in enabled drivers build config 00:01:07.093 event/dlb2: not in enabled drivers build config 00:01:07.093 event/dpaa: not in enabled drivers build config 00:01:07.093 event/dpaa2: not in enabled drivers build config 00:01:07.093 event/dsw: not in enabled drivers build config 00:01:07.093 event/opdl: not in enabled drivers build config 00:01:07.093 event/skeleton: not in enabled drivers build config 00:01:07.093 event/sw: not in enabled drivers build config 00:01:07.093 event/octeontx: not in enabled drivers build config 00:01:07.093 baseband/acc: not in enabled drivers build config 00:01:07.093 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:07.093 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:07.093 baseband/la12xx: not in enabled drivers build config 00:01:07.093 baseband/null: not in enabled drivers build config 00:01:07.093 baseband/turbo_sw: not in enabled drivers build config 00:01:07.093 gpu/cuda: not in enabled drivers build config 00:01:07.093 00:01:07.093 00:01:07.093 Build targets in project: 217 00:01:07.093 00:01:07.093 DPDK 23.11.0 00:01:07.093 00:01:07.093 User defined options 00:01:07.093 libdir : lib 00:01:07.093 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:07.093 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:07.093 c_link_args : 00:01:07.093 enable_docs : false 00:01:07.093 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:07.093 enable_kmods : false 00:01:07.093 machine : native 00:01:07.093 tests : false 00:01:07.093 00:01:07.093 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:07.093 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:07.093 02:27:10 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j72 00:01:07.356 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:07.356 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:07.356 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:07.356 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:07.356 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:07.356 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:07.356 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:07.356 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:07.356 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:07.356 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.356 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:07.356 [11/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:07.356 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:07.356 [13/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.617 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:07.617 [15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:07.617 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:07.617 [17/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:07.617 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:07.617 [19/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:07.617 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:07.617 [21/707] Linking static target lib/librte_kvargs.a 00:01:07.617 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:07.617 [23/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:07.617 [24/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:07.617 [25/707] Linking static target lib/librte_log.a 00:01:07.885 [26/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:08.147 [27/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.147 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:08.147 [29/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:08.147 [30/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:08.147 [31/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:08.147 [32/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:08.147 [33/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:08.147 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:08.147 [35/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:08.147 [36/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:08.147 [37/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:08.147 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:08.147 [39/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:08.147 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:08.147 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:08.147 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:08.147 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:08.147 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:08.147 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:08.147 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:08.147 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:08.147 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:08.147 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:08.147 [50/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:08.147 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:08.147 [52/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:08.147 [53/707] Linking static target lib/librte_ring.a 00:01:08.147 [54/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:08.147 [55/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:08.147 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:08.147 [57/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:08.147 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:08.147 [59/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:08.147 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:08.406 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:08.406 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:08.406 [63/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:08.407 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:08.407 [65/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:08.407 [66/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:08.407 [67/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:08.407 [68/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:08.407 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:08.407 [70/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:08.407 [71/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:08.407 [72/707] Compiling C 
object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:08.407 [73/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:08.407 [74/707] Linking static target lib/librte_meter.a 00:01:08.407 [75/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:08.407 [76/707] Linking static target lib/librte_pci.a 00:01:08.407 [77/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:08.407 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:08.407 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:08.407 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:08.407 [81/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:08.407 [82/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:08.407 [83/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:08.407 [84/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:08.407 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:08.407 [86/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:08.407 [87/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:08.407 [88/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:08.407 [89/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:08.407 [90/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.407 [91/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:08.407 [92/707] Linking static target lib/librte_net.a 00:01:08.668 [93/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:08.668 [94/707] Linking target lib/librte_log.so.24.0 00:01:08.668 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:08.668 [96/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:08.668 [97/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:08.668 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:08.668 [99/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.668 [100/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:08.668 [101/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.668 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:08.668 [103/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.668 [104/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:08.668 [105/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:08.668 [106/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:08.933 [107/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:08.933 [108/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:08.933 [109/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:08.933 [110/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:08.933 [111/707] Linking target lib/librte_kvargs.so.24.0 00:01:08.933 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:08.933 [113/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:08.933 [114/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:08.933 [115/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:08.933 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:08.933 [117/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:08.933 [118/707] Linking static target lib/librte_cmdline.a 00:01:08.933 [119/707] Linking static target lib/librte_cfgfile.a 00:01:08.933 [120/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:08.933 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:08.933 [122/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.933 [123/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:08.933 [124/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:08.933 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:08.933 [126/707] Linking static target lib/librte_mempool.a 00:01:08.933 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:08.934 [128/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:09.192 [129/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:09.192 [130/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:09.192 [131/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:09.192 [132/707] Linking static target lib/librte_metrics.a 00:01:09.192 [133/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:09.192 [134/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:09.192 [135/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:09.192 [136/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:09.192 [137/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:09.192 [138/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:09.192 [139/707] Linking static target lib/librte_eal.a 00:01:09.192 [140/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:09.192 [141/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:09.192 [142/707] Linking static target lib/librte_bitratestats.a 00:01:09.192 [143/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:09.456 [144/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:09.456 [145/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:09.456 [146/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:09.456 [147/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:09.456 [148/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:09.456 [149/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:09.456 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:09.456 [151/707] Linking static target lib/librte_mbuf.a 00:01:09.456 [152/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:09.457 [153/707] Linking static target lib/librte_bbdev.a 00:01:09.457 [154/707] Linking static target lib/librte_telemetry.a 00:01:09.457 [155/707] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:09.457 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:09.457 [157/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.457 [158/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:09.457 [159/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.457 [160/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:09.457 [161/707] Linking static target lib/librte_rcu.a 00:01:09.457 [162/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:09.457 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:09.457 [164/707] Linking static target lib/librte_compressdev.a 00:01:09.457 [165/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:09.457 [166/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.716 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.716 [168/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:09.716 [169/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:09.716 [170/707] Linking static target lib/librte_timer.a 00:01:09.716 [171/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:09.716 [172/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:09.716 [173/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:09.716 [174/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:09.716 [175/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:09.716 [176/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:09.716 [177/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:09.716 [178/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:09.716 [179/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:09.716 [180/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.716 [181/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:09.716 [182/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:09.716 [183/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:09.716 [184/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.978 [185/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:09.978 [186/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:09.978 [187/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:09.978 [188/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.978 [189/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:09.978 [190/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:09.978 [191/707] Linking static target lib/librte_dispatcher.a 00:01:09.978 [192/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:09.978 [193/707] Linking static target lib/librte_gso.a 00:01:09.978 [194/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:09.978 [195/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:01:09.978 [196/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:10.242 [197/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:10.242 [198/707] Linking static target lib/librte_distributor.a 00:01:10.242 [199/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:10.242 [200/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:10.242 [201/707] Linking static target lib/librte_jobstats.a 00:01:10.242 [202/707] Linking static target lib/librte_dmadev.a 00:01:10.242 [203/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:10.242 [204/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.242 [205/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:10.242 [206/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.242 [207/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:10.242 [208/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:10.242 [209/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:10.242 [210/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:10.242 [211/707] Linking static target lib/librte_gro.a 00:01:10.242 [212/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.242 [213/707] Linking static target lib/librte_gpudev.a 00:01:10.242 [214/707] Linking target lib/librte_telemetry.so.24.0 00:01:10.242 [215/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:10.242 [216/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:10.242 [217/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:10.242 [218/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:10.515 [219/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:10.515 [220/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:10.515 [221/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [222/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [223/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [224/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [225/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:10.515 [226/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:10.515 [227/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:10.515 [228/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:10.515 [229/707] Linking static target lib/librte_latencystats.a 00:01:10.515 [230/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:10.515 [231/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [232/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:10.515 [233/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:10.515 [234/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:10.515 [235/707] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:10.515 [236/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:10.515 [237/707] Linking static target lib/librte_ip_frag.a 00:01:10.515 [238/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:10.515 [239/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:10.515 [240/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:10.515 [241/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:10.515 [242/707] Linking static target lib/librte_bpf.a 00:01:10.515 [243/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.515 [244/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.781 [245/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.781 [246/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:10.781 [247/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:10.781 [248/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:10.781 [249/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:10.781 [250/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:10.781 [251/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:10.781 [252/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:10.781 [253/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.781 [254/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:10.781 [255/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:10.781 [256/707] Linking static target lib/librte_regexdev.a 00:01:10.781 [257/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:10.781 [258/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.781 [259/707] Linking static target lib/librte_stack.a 00:01:10.781 [260/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:10.781 [261/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.046 [262/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:11.046 [263/707] Linking static target lib/librte_mldev.a 00:01:11.046 [264/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:11.046 [265/707] Linking static target lib/librte_pcapng.a 00:01:11.046 [266/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:11.046 [267/707] Linking static target lib/librte_rawdev.a 00:01:11.046 [268/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:11.046 [269/707] Linking static target lib/librte_power.a 00:01:11.046 [270/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:11.046 [271/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.046 [272/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:11.046 [273/707] Linking static target lib/librte_security.a 00:01:11.046 [274/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:11.046 [275/707] Generating lib/bpf.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:11.046 [276/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:11.046 [277/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:11.046 [278/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:11.312 [279/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:11.312 [280/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:11.312 [281/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:11.312 [282/707] Linking static target lib/librte_reorder.a 00:01:11.312 [283/707] Linking static target lib/librte_efd.a 00:01:11.312 [284/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.312 [285/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:11.312 [286/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:11.312 [287/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:11.312 [288/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:11.312 [289/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:11.312 [290/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:11.312 [291/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:11.312 [292/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:11.312 [293/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:11.312 [294/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.312 [295/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:11.312 [296/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:11.312 [297/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:11.574 [298/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:11.574 [299/707] Linking static target lib/librte_lpm.a 00:01:11.574 [300/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:11.574 [301/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:11.574 [302/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:11.574 [303/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:11.574 [304/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:11.574 [305/707] Linking static target lib/librte_rib.a 00:01:11.574 [306/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:11.574 [307/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.574 [308/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:11.574 [309/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:11.574 [310/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:11.574 [311/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:11.574 [312/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:11.842 [313/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.842 [314/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.842 [315/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.842 [316/707] Generating lib/security.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:11.842 [317/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:11.842 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:12.101 [319/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:12.101 [320/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:12.101 [321/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:12.101 [322/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:12.101 [323/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.101 [324/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:12.101 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:12.101 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:12.101 [327/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:12.101 [328/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:12.101 [329/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:12.101 [330/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:12.101 [331/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.101 [332/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.101 [333/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:12.101 [334/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:12.361 [335/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:12.361 [336/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:12.361 [337/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:12.361 [338/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:12.361 [339/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:12.361 [340/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:12.361 [341/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:12.361 [342/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.361 [343/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:12.361 [344/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:12.361 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:12.361 [346/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:12.361 [347/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:12.361 [348/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:12.625 [349/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:12.625 [350/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:12.625 [351/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:12.625 [352/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:12.625 [353/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:12.625 [354/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:12.625 [355/707] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:12.625 [356/707] Linking static target lib/librte_cryptodev.a 00:01:12.625 [357/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:12.625 [358/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:12.625 [359/707] Linking static target lib/librte_sched.a 00:01:12.625 [360/707] Linking static target lib/librte_fib.a 00:01:12.625 [361/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:12.625 [362/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:12.887 [363/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:12.887 [364/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:12.887 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:12.887 [366/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:12.887 [367/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:12.887 [368/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:12.887 [369/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:12.887 [370/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:12.887 [371/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:12.887 [372/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:12.887 [373/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:13.152 [374/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:13.152 [375/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:13.152 [376/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:13.152 [377/707] Linking static target lib/librte_pdump.a 00:01:13.152 [378/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:13.152 [379/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:13.152 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:13.152 [381/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.152 [382/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.152 [383/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:13.152 [384/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:13.152 [385/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:13.152 [386/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:13.152 [387/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:13.152 [388/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:13.152 [389/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:13.152 [390/707] Linking static target lib/librte_graph.a 00:01:13.413 [391/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:13.413 [392/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:13.413 [393/707] Linking static target lib/acl/libavx2_tmp.a 00:01:13.413 [394/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.413 [395/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:13.413 [396/707] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:13.413 [397/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:13.413 [398/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.413 [399/707] Linking static target lib/librte_hash.a 00:01:13.413 [400/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:13.413 [401/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:13.413 [402/707] Linking static target lib/librte_table.a 00:01:13.413 [403/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:13.413 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:13.413 [405/707] Linking static target lib/librte_member.a 00:01:13.413 [406/707] Linking static target lib/librte_ipsec.a 00:01:13.413 [407/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.677 [408/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.677 [409/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:13.677 [410/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:13.677 [411/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.677 [412/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.677 [413/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:13.677 [414/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.677 [415/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.677 [416/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:13.677 [417/707] Linking static target drivers/librte_bus_vdev.a 00:01:13.677 [418/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:13.677 [419/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:13.677 [420/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:13.677 [421/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:13.677 [422/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:13.677 [423/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:13.677 [424/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:13.677 [425/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:13.677 [426/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:13.937 [427/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.937 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:13.937 [429/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.937 [430/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:13.937 [431/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:13.937 [432/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:13.937 [433/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:13.937 [434/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:13.937 [435/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:13.937 [436/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:13.937 [437/707] Linking static target lib/librte_eventdev.a 00:01:13.937 
[438/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:13.937 [439/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:13.937 [440/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:13.937 [441/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.937 [442/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:13.937 [443/707] Linking static target lib/librte_node.a 00:01:13.937 [444/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.937 [445/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:13.937 [446/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:13.937 [447/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.937 [448/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:13.937 [449/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:14.205 [450/707] Linking static target drivers/librte_bus_pci.a 00:01:14.205 [451/707] Linking static target lib/librte_pdcp.a 00:01:14.205 [452/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:14.205 [453/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.205 [454/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.205 [455/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:14.205 [456/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.205 [457/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:14.205 [458/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:14.205 [459/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.205 [460/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:14.205 [461/707] Linking static target drivers/librte_mempool_ring.a 00:01:14.205 [462/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:14.205 [463/707] Linking static target lib/librte_acl.a 00:01:14.471 [464/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:14.471 [465/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:14.471 [466/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:14.471 [467/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:14.471 [468/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.471 [469/707] Linking static target lib/librte_port.a 00:01:14.471 [470/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:14.471 [471/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:14.471 [472/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.471 [473/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:14.471 [474/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:14.471 [475/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:14.471 [476/707] Generating lib/node.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:14.471 [477/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:14.741 [478/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:14.741 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:14.741 [480/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.741 [481/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:14.741 [482/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.741 [483/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:14.741 [484/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:14.741 [485/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:14.741 [486/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.741 [487/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:14.741 [488/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:15.008 [489/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.008 [490/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:15.008 [491/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:15.008 [492/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.008 [493/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:15.008 [494/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:15.008 [495/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:15.008 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:15.008 [497/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:15.008 [498/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:15.271 [499/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:15.271 [500/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:15.271 [501/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:15.271 [502/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:15.271 [503/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:15.271 [504/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:15.271 [505/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:15.271 [506/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:15.271 [507/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:15.271 [508/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:15.271 [509/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:15.271 [510/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:15.271 [511/707] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:15.271 [512/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:15.271 [513/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:15.271 [514/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:15.271 [515/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:15.530 [516/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:15.530 [517/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.530 [518/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:15.530 [519/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:15.530 [520/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:15.530 [521/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:15.530 [522/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:15.530 [523/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:15.530 [524/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:15.530 [525/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:15.530 [526/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:15.530 [527/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:15.530 [528/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:15.530 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:15.530 [530/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:15.789 [531/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:15.789 [532/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:15.789 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:15.789 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:15.789 [535/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:15.789 [536/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:15.789 [537/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:15.789 [538/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:15.789 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:16.047 [540/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:16.047 [541/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:16.047 [542/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:16.047 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:16.047 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:16.047 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:16.047 [546/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:16.047 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:16.047 [548/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:16.047 [549/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:16.047 [550/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:16.047 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:16.047 [552/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:16.047 [553/707] Linking static target lib/librte_ethdev.a 00:01:16.047 [554/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:16.047 [555/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:16.047 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:16.047 [557/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:16.047 [558/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:16.047 [559/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:16.047 [560/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:16.047 [561/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:16.307 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:16.307 [563/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:16.307 [564/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:16.307 [565/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:16.307 [566/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:16.307 [567/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:16.307 [568/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:16.566 [569/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:16.566 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:16.826 [571/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:17.086 [572/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:17.086 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:17.086 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:17.346 [575/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:17.605 [576/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.605 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:17.605 [578/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:17.864 [579/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:18.434 [580/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:18.694 [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:18.953 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:18.954 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:19.521 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:19.521 [585/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:19.521 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:19.521 [587/707] Linking static target 
drivers/librte_net_i40e.a 00:01:19.521 [588/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:19.521 [589/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:20.897 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:20.897 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.506 [592/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.506 [593/707] Linking target lib/librte_eal.so.24.0 00:01:21.806 [594/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:21.806 [595/707] Linking target lib/librte_ring.so.24.0 00:01:21.806 [596/707] Linking target lib/librte_meter.so.24.0 00:01:21.806 [597/707] Linking target lib/librte_pci.so.24.0 00:01:21.806 [598/707] Linking target lib/librte_rawdev.so.24.0 00:01:21.806 [599/707] Linking target lib/librte_dmadev.so.24.0 00:01:21.806 [600/707] Linking target lib/librte_cfgfile.so.24.0 00:01:21.806 [601/707] Linking target lib/librte_timer.so.24.0 00:01:21.806 [602/707] Linking target lib/librte_jobstats.so.24.0 00:01:21.806 [603/707] Linking target lib/librte_stack.so.24.0 00:01:21.806 [604/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:21.806 [605/707] Linking target lib/librte_acl.so.24.0 00:01:21.806 [606/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:21.806 [607/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:21.806 [608/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:21.806 [609/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:21.806 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:21.806 [611/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:21.806 [612/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:22.066 [613/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:22.066 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:22.066 [615/707] Linking target lib/librte_mempool.so.24.0 00:01:22.066 [616/707] Linking target lib/librte_rcu.so.24.0 00:01:22.066 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:22.066 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:22.066 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:22.325 [620/707] Linking target lib/librte_mbuf.so.24.0 00:01:22.325 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:22.325 [622/707] Linking target lib/librte_rib.so.24.0 00:01:22.325 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:22.325 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:22.325 [625/707] Linking target lib/librte_compressdev.so.24.0 00:01:22.325 [626/707] Linking target lib/librte_net.so.24.0 00:01:22.325 [627/707] Linking target lib/librte_distributor.so.24.0 00:01:22.325 [628/707] Linking target lib/librte_bbdev.so.24.0 00:01:22.325 [629/707] Linking target lib/librte_regexdev.so.24.0 00:01:22.325 [630/707] Linking target lib/librte_gpudev.so.24.0 00:01:22.325 [631/707] Linking target 
lib/librte_mldev.so.24.0 00:01:22.325 [632/707] Linking target lib/librte_reorder.so.24.0 00:01:22.325 [633/707] Linking target lib/librte_cryptodev.so.24.0 00:01:22.325 [634/707] Linking target lib/librte_sched.so.24.0 00:01:22.325 [635/707] Linking target lib/librte_fib.so.24.0 00:01:22.584 [636/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:22.584 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:22.584 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:22.584 [639/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:22.584 [640/707] Linking target lib/librte_security.so.24.0 00:01:22.584 [641/707] Linking target lib/librte_cmdline.so.24.0 00:01:22.584 [642/707] Linking target lib/librte_hash.so.24.0 00:01:22.843 [643/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:22.843 [644/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:22.843 [645/707] Linking target lib/librte_pdcp.so.24.0 00:01:22.843 [646/707] Linking target lib/librte_efd.so.24.0 00:01:22.843 [647/707] Linking target lib/librte_lpm.so.24.0 00:01:22.843 [648/707] Linking target lib/librte_member.so.24.0 00:01:22.843 [649/707] Linking target lib/librte_ipsec.so.24.0 00:01:23.102 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:23.102 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:25.004 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.004 [653/707] Linking target lib/librte_ethdev.so.24.0 00:01:25.263 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:25.263 [655/707] Linking target lib/librte_gro.so.24.0 00:01:25.263 [656/707] Linking target lib/librte_ip_frag.so.24.0 00:01:25.263 [657/707] Linking target lib/librte_metrics.so.24.0 00:01:25.263 [658/707] Linking target lib/librte_bpf.so.24.0 00:01:25.263 [659/707] Linking target lib/librte_gso.so.24.0 00:01:25.263 [660/707] Linking target lib/librte_pcapng.so.24.0 00:01:25.263 [661/707] Linking target lib/librte_power.so.24.0 00:01:25.263 [662/707] Linking target lib/librte_eventdev.so.24.0 00:01:25.263 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:25.523 [664/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:25.523 [665/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:25.523 [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:25.523 [667/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:25.523 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:25.523 [669/707] Linking target lib/librte_latencystats.so.24.0 00:01:25.523 [670/707] Linking target lib/librte_pdump.so.24.0 00:01:25.523 [671/707] Linking target lib/librte_bitratestats.so.24.0 00:01:25.523 [672/707] Linking target lib/librte_graph.so.24.0 00:01:25.523 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:01:25.523 [674/707] Linking target lib/librte_port.so.24.0 00:01:25.782 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:25.782 [676/707] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:25.782 [677/707] Linking target lib/librte_node.so.24.0 00:01:25.782 [678/707] Linking target lib/librte_table.so.24.0 00:01:26.041 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:30.230 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:30.230 [681/707] Linking static target lib/librte_pipeline.a 00:01:30.489 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:30.489 [683/707] Linking static target lib/librte_vhost.a 00:01:31.054 [684/707] Linking target app/dpdk-test-fib 00:01:31.055 [685/707] Linking target app/dpdk-test-compress-perf 00:01:31.055 [686/707] Linking target app/dpdk-dumpcap 00:01:31.055 [687/707] Linking target app/dpdk-test-cmdline 00:01:31.055 [688/707] Linking target app/dpdk-pdump 00:01:31.055 [689/707] Linking target app/dpdk-test-sad 00:01:31.055 [690/707] Linking target app/dpdk-test-crypto-perf 00:01:31.055 [691/707] Linking target app/dpdk-test-security-perf 00:01:31.055 [692/707] Linking target app/dpdk-test-regex 00:01:31.055 [693/707] Linking target app/dpdk-graph 00:01:31.055 [694/707] Linking target app/dpdk-test-acl 00:01:31.055 [695/707] Linking target app/dpdk-proc-info 00:01:31.055 [696/707] Linking target app/dpdk-test-dma-perf 00:01:31.313 [697/707] Linking target app/dpdk-test-mldev 00:01:31.313 [698/707] Linking target app/dpdk-test-flow-perf 00:01:31.313 [699/707] Linking target app/dpdk-test-pipeline 00:01:31.313 [700/707] Linking target app/dpdk-test-bbdev 00:01:31.313 [701/707] Linking target app/dpdk-test-eventdev 00:01:31.313 [702/707] Linking target app/dpdk-test-gpudev 00:01:31.313 [703/707] Linking target app/dpdk-testpmd 00:01:33.219 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.219 [705/707] Linking target lib/librte_vhost.so.24.0 00:01:35.755 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.755 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:35.755 02:27:38 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j72 install 00:01:35.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:35.755 [0/1] Installing files. 
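For reference, the configure/build/install sequence recorded in this log can be approximated by hand with the commands sketched below. This is a minimal sketch inferred from the "User defined options" summary and the two logged ninja invocations, not the actual contents of common/autobuild_common.sh; the workspace path, option values, and -j72 job count are copied from the log, while the use of `meson setup` (rather than the deprecated bare `meson` form the warning above refers to) and the omission of the `machine : native` setting are assumptions.

    # Sketch: configure, build and install DPDK 23.11 roughly as this job does.
    # DPDK_DIR matches the workspace path shown in the log; adjust as needed.
    DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk
    meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR" \
        --prefix="$DPDK_DIR/build" --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Dtests=false -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C "$DPDK_DIR/build-tmp" -j72          # build step: the [1/707]..[707/707] lines above
    ninja -C "$DPDK_DIR/build-tmp" -j72 install  # install step: the "Installing ..." lines that follow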
00:01:36.018 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:36.018 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.018 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:36.019 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:36.020 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.020 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.021 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.022 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:36.023 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:36.024 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:36.024 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing 
lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.024 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.283 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
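(Context: the surrounding entries are the output of DPDK's install step copying example sources, static and shared libraries, driver plugins under dpdk/pmds-24.0, apps, and headers into the dpdk/build prefix. As a rough, illustrative sketch only, and not the exact invocation used by this CI job (its meson options are not visible in this excerpt), a DPDK v23.11 build and install that emits "Installing <src> to <dest>" lines like these is normally driven with meson and ninja:

    cd dpdk
    meson setup build --prefix=$PWD/build    # configure; a prefix of .../dpdk/build matches the destinations above (assumed here)
    ninja -C build                           # compile libraries, drivers, and apps
    meson install -C build                   # meson prints one "Installing <src> to <dest>" line per installed file

The directory and prefix values above are placeholders chosen to match the paths in this log; only the meson/ninja commands themselves are the standard DPDK build flow.)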
00:01:36.284 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:36.284 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:36.284 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:36.284 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:36.284 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:36.284 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.284 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.547 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.548 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.549 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.550 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
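The entries above copy DPDK's public headers, including the rte_table_*, rte_swx_* and rte_pipeline headers, into the build prefix's include directory. A minimal shell sketch (illustrative, not part of the captured run; the prefix path is taken from the log) for spot-checking the result of this install step:

    # Install prefix used throughout this job's log.
    PREFIX=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
    # Total number of public rte_*.h headers that landed in the prefix...
    ls "$PREFIX"/include/rte_*.h | wc -l
    # ...and the table / pipeline subset named in the entries above.
    ls "$PREFIX"/include/rte_table_*.h "$PREFIX"/include/rte_swx_*.h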
00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:36.551 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:36.551 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:36.551 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:01:36.551 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:36.551 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:36.551 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:36.551 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:36.551 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:36.551 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:36.551 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:36.551 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:36.551 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:36.551 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:36.551 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:36.551 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:36.551 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:36.551 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:36.551 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:36.551 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:01:36.551 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:36.551 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:36.551 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:36.551 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:36.551 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:36.551 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:36.551 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:36.551 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:36.551 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:36.551 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:36.551 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:36.551 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:36.551 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:36.551 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:36.551 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:36.551 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:36.551 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:36.551 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:36.551 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:36.551 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:36.551 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:36.551 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:36.551 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:36.551 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:36.551 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:36.551 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:36.551 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:36.551 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:36.551 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:36.551 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:36.551 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:36.551 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:36.552 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:36.552 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:36.552 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:36.552 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:36.552 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:36.552 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:36.552 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:36.552 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:36.552 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:36.552 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:36.552 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:36.552 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:36.552 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:36.552 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:36.552 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:36.552 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:36.552 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:36.552 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:36.552 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:36.552 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:36.552 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:36.552 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:36.552 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:36.552 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:36.552 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:36.552 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:36.552 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:36.552 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:36.552 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:36.552 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:36.552 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:36.552 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:36.552 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:36.552 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:01:36.552 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:36.552 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:36.552 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:36.552 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:01:36.552 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:36.552 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:36.552 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:36.552 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:36.552 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:36.552 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:36.552 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:36.552 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:36.552 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:36.552 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:36.552 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:36.552 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:36.552 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:36.552 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:01:36.552 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:36.552 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:36.552 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:36.552 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:36.552 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:36.552 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:36.552 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:36.552 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:36.552 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:36.552 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:36.552 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:36.552 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:01:36.552 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:36.552 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:36.552 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:36.552 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:01:36.552 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:36.552 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:36.552 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:36.552 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:36.552 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:36.552 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:01:36.552 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:36.552 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:36.552 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:36.552 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:36.552 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
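Each DPDK library above is installed as one real shared object (librte_foo.so.24.0) plus two symlinks (librte_foo.so.24 and librte_foo.so), and the bus/net driver libraries are additionally linked into the dpdk/pmds-24.0 plugin directory. A hedged shell sketch for inspecting that layout after such an install; the paths mirror the build prefix used here:

    LIBDIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
    # Symlink chain for one library: librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0
    ls -l "$LIBDIR"/librte_eal.so*
    # Driver (PMD) libraries also appear under the plugin directory populated by the './librte_*' entries above.
    ls "$LIBDIR"/dpdk/pmds-24.0/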
00:01:36.552 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:36.552 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:36.552 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:36.552 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:36.553 02:27:39 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:01:36.553 02:27:39 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:36.553 02:27:39 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:01:36.553 02:27:39 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:36.553 00:01:36.553 real 0m36.361s 00:01:36.553 user 10m1.230s 00:01:36.553 sys 2m10.910s 00:01:36.553 02:27:39 build_native_dpdk -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:36.553 02:27:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:36.553 ************************************ 00:01:36.553 END TEST build_native_dpdk 00:01:36.553 ************************************ 00:01:36.812 02:27:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:36.812 02:27:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:36.812 02:27:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:01:36.812 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:37.071 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:37.072 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:37.072 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:37.330 Using 'verbs' RDMA provider 00:01:53.281 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:08.190 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:08.190 Creating mk/config.mk...done. 00:02:08.190 Creating mk/cc.flags.mk...done. 00:02:08.190 Type 'make' to build. 
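The configure invocation above points SPDK at this external DPDK build through --with-dpdk, and the "Using ... pkgconfig for additional libs" line shows it reading the libdpdk.pc / libdpdk-libs.pc files installed earlier. A small sketch (not captured output) of querying that same metadata by hand, with PKG_CONFIG_PATH set to the directory named in the log:

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # reports 23.11.0 for the checkout built in this job
    pkg-config --cflags libdpdk       # -I flags pointing at the build/include directory shown above
    pkg-config --libs libdpdk         # -L/-l flags for the installed librte_* shared objects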
00:02:08.190 02:28:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:02:08.190 02:28:10 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:02:08.190 02:28:10 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:02:08.190 02:28:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.190 ************************************ 00:02:08.190 START TEST make 00:02:08.190 ************************************ 00:02:08.190 02:28:10 make -- common/autotest_common.sh@1122 -- $ make -j72 00:02:08.190 make[1]: Nothing to be done for 'all'. 00:02:23.084 CC lib/ut_mock/mock.o 00:02:23.084 CC lib/log/log.o 00:02:23.084 CC lib/log/log_flags.o 00:02:23.084 CC lib/ut/ut.o 00:02:23.084 CC lib/log/log_deprecated.o 00:02:23.084 LIB libspdk_ut.a 00:02:23.084 LIB libspdk_ut_mock.a 00:02:23.084 SO libspdk_ut.so.2.0 00:02:23.084 LIB libspdk_log.a 00:02:23.084 SO libspdk_ut_mock.so.6.0 00:02:23.084 SO libspdk_log.so.7.0 00:02:23.084 SYMLINK libspdk_ut.so 00:02:23.084 SYMLINK libspdk_ut_mock.so 00:02:23.084 SYMLINK libspdk_log.so 00:02:23.084 CC lib/dma/dma.o 00:02:23.084 CC lib/ioat/ioat.o 00:02:23.084 CC lib/util/base64.o 00:02:23.084 CC lib/util/bit_array.o 00:02:23.084 CC lib/util/crc16.o 00:02:23.084 CC lib/util/cpuset.o 00:02:23.084 CXX lib/trace_parser/trace.o 00:02:23.084 CC lib/util/crc32.o 00:02:23.084 CC lib/util/crc32c.o 00:02:23.084 CC lib/util/crc32_ieee.o 00:02:23.084 CC lib/util/crc64.o 00:02:23.084 CC lib/util/dif.o 00:02:23.084 CC lib/util/fd.o 00:02:23.084 CC lib/util/file.o 00:02:23.084 CC lib/util/hexlify.o 00:02:23.084 CC lib/util/iov.o 00:02:23.084 CC lib/util/math.o 00:02:23.084 CC lib/util/pipe.o 00:02:23.084 CC lib/util/strerror_tls.o 00:02:23.084 CC lib/util/string.o 00:02:23.084 CC lib/util/uuid.o 00:02:23.084 CC lib/util/fd_group.o 00:02:23.084 CC lib/util/xor.o 00:02:23.084 CC lib/util/zipf.o 00:02:23.084 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.084 CC lib/vfio_user/host/vfio_user.o 00:02:23.084 LIB libspdk_dma.a 00:02:23.084 SO libspdk_dma.so.4.0 00:02:23.084 LIB libspdk_ioat.a 00:02:23.084 SYMLINK libspdk_dma.so 00:02:23.084 SO libspdk_ioat.so.7.0 00:02:23.084 SYMLINK libspdk_ioat.so 00:02:23.084 LIB libspdk_vfio_user.a 00:02:23.084 SO libspdk_vfio_user.so.5.0 00:02:23.084 LIB libspdk_util.a 00:02:23.084 SYMLINK libspdk_vfio_user.so 00:02:23.084 SO libspdk_util.so.9.0 00:02:23.084 SYMLINK libspdk_util.so 00:02:23.084 LIB libspdk_trace_parser.a 00:02:23.084 SO libspdk_trace_parser.so.5.0 00:02:23.084 SYMLINK libspdk_trace_parser.so 00:02:23.084 CC lib/idxd/idxd.o 00:02:23.084 CC lib/idxd/idxd_user.o 00:02:23.084 CC lib/vmd/vmd.o 00:02:23.084 CC lib/conf/conf.o 00:02:23.084 CC lib/vmd/led.o 00:02:23.084 CC lib/json/json_parse.o 00:02:23.084 CC lib/env_dpdk/env.o 00:02:23.084 CC lib/rdma/common.o 00:02:23.084 CC lib/json/json_util.o 00:02:23.084 CC lib/env_dpdk/memory.o 00:02:23.084 CC lib/rdma/rdma_verbs.o 00:02:23.084 CC lib/env_dpdk/pci.o 00:02:23.084 CC lib/json/json_write.o 00:02:23.084 CC lib/env_dpdk/init.o 00:02:23.084 CC lib/env_dpdk/threads.o 00:02:23.084 CC lib/env_dpdk/pci_ioat.o 00:02:23.084 CC lib/env_dpdk/pci_virtio.o 00:02:23.084 CC lib/env_dpdk/pci_vmd.o 00:02:23.343 CC lib/env_dpdk/pci_idxd.o 00:02:23.343 CC lib/env_dpdk/pci_event.o 00:02:23.344 CC lib/env_dpdk/sigbus_handler.o 00:02:23.344 CC lib/env_dpdk/pci_dpdk.o 00:02:23.344 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.344 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.603 LIB libspdk_conf.a 00:02:23.603 SO libspdk_conf.so.6.0 00:02:23.603 LIB libspdk_rdma.a 00:02:23.603 LIB libspdk_json.a 00:02:23.603 
SO libspdk_rdma.so.6.0 00:02:23.603 SYMLINK libspdk_conf.so 00:02:23.603 SO libspdk_json.so.6.0 00:02:23.603 SYMLINK libspdk_rdma.so 00:02:23.603 SYMLINK libspdk_json.so 00:02:23.862 LIB libspdk_idxd.a 00:02:23.862 SO libspdk_idxd.so.12.0 00:02:23.862 LIB libspdk_vmd.a 00:02:23.862 SYMLINK libspdk_idxd.so 00:02:23.862 SO libspdk_vmd.so.6.0 00:02:24.122 SYMLINK libspdk_vmd.so 00:02:24.122 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.122 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.122 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.122 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.382 LIB libspdk_jsonrpc.a 00:02:24.382 SO libspdk_jsonrpc.so.6.0 00:02:24.382 SYMLINK libspdk_jsonrpc.so 00:02:24.640 LIB libspdk_env_dpdk.a 00:02:24.640 SO libspdk_env_dpdk.so.14.0 00:02:24.900 CC lib/rpc/rpc.o 00:02:24.900 SYMLINK libspdk_env_dpdk.so 00:02:25.159 LIB libspdk_rpc.a 00:02:25.159 SO libspdk_rpc.so.6.0 00:02:25.159 SYMLINK libspdk_rpc.so 00:02:25.728 CC lib/notify/notify.o 00:02:25.728 CC lib/notify/notify_rpc.o 00:02:25.728 CC lib/trace/trace.o 00:02:25.728 CC lib/trace/trace_flags.o 00:02:25.728 CC lib/trace/trace_rpc.o 00:02:25.728 CC lib/keyring/keyring.o 00:02:25.728 CC lib/keyring/keyring_rpc.o 00:02:25.728 LIB libspdk_notify.a 00:02:25.728 SO libspdk_notify.so.6.0 00:02:25.728 LIB libspdk_keyring.a 00:02:25.728 LIB libspdk_trace.a 00:02:25.988 SYMLINK libspdk_notify.so 00:02:25.988 SO libspdk_keyring.so.1.0 00:02:25.988 SO libspdk_trace.so.10.0 00:02:25.988 SYMLINK libspdk_keyring.so 00:02:25.988 SYMLINK libspdk_trace.so 00:02:26.247 CC lib/thread/thread.o 00:02:26.247 CC lib/thread/iobuf.o 00:02:26.247 CC lib/sock/sock.o 00:02:26.247 CC lib/sock/sock_rpc.o 00:02:26.816 LIB libspdk_sock.a 00:02:26.816 SO libspdk_sock.so.9.0 00:02:26.816 SYMLINK libspdk_sock.so 00:02:27.384 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.384 CC lib/nvme/nvme_ctrlr.o 00:02:27.384 CC lib/nvme/nvme_fabric.o 00:02:27.384 CC lib/nvme/nvme_ns_cmd.o 00:02:27.384 CC lib/nvme/nvme_ns.o 00:02:27.384 CC lib/nvme/nvme_pcie_common.o 00:02:27.384 CC lib/nvme/nvme_pcie.o 00:02:27.384 CC lib/nvme/nvme_qpair.o 00:02:27.384 CC lib/nvme/nvme.o 00:02:27.384 CC lib/nvme/nvme_quirks.o 00:02:27.384 CC lib/nvme/nvme_transport.o 00:02:27.384 CC lib/nvme/nvme_discovery.o 00:02:27.384 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.384 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.384 CC lib/nvme/nvme_tcp.o 00:02:27.384 CC lib/nvme/nvme_opal.o 00:02:27.384 CC lib/nvme/nvme_io_msg.o 00:02:27.384 CC lib/nvme/nvme_zns.o 00:02:27.384 CC lib/nvme/nvme_poll_group.o 00:02:27.384 CC lib/nvme/nvme_stubs.o 00:02:27.384 CC lib/nvme/nvme_auth.o 00:02:27.384 CC lib/nvme/nvme_cuse.o 00:02:27.384 CC lib/nvme/nvme_rdma.o 00:02:27.952 LIB libspdk_thread.a 00:02:27.952 SO libspdk_thread.so.10.0 00:02:27.952 SYMLINK libspdk_thread.so 00:02:28.521 CC lib/blob/blobstore.o 00:02:28.521 CC lib/blob/request.o 00:02:28.521 CC lib/blob/zeroes.o 00:02:28.521 CC lib/blob/blob_bs_dev.o 00:02:28.521 CC lib/init/json_config.o 00:02:28.521 CC lib/virtio/virtio.o 00:02:28.521 CC lib/init/subsystem.o 00:02:28.521 CC lib/accel/accel.o 00:02:28.521 CC lib/virtio/virtio_vhost_user.o 00:02:28.521 CC lib/virtio/virtio_vfio_user.o 00:02:28.521 CC lib/accel/accel_rpc.o 00:02:28.521 CC lib/init/subsystem_rpc.o 00:02:28.521 CC lib/virtio/virtio_pci.o 00:02:28.521 CC lib/init/rpc.o 00:02:28.521 CC lib/accel/accel_sw.o 00:02:28.521 LIB libspdk_init.a 00:02:28.780 SO libspdk_init.so.5.0 00:02:28.780 LIB libspdk_virtio.a 00:02:28.780 SYMLINK libspdk_init.so 00:02:28.780 SO libspdk_virtio.so.7.0 00:02:28.780 
SYMLINK libspdk_virtio.so 00:02:29.039 LIB libspdk_nvme.a 00:02:29.039 CC lib/event/app.o 00:02:29.039 CC lib/event/reactor.o 00:02:29.039 CC lib/event/log_rpc.o 00:02:29.039 CC lib/event/app_rpc.o 00:02:29.039 CC lib/event/scheduler_static.o 00:02:29.298 SO libspdk_nvme.so.13.0 00:02:29.556 LIB libspdk_accel.a 00:02:29.556 SO libspdk_accel.so.15.0 00:02:29.556 SYMLINK libspdk_nvme.so 00:02:29.556 LIB libspdk_event.a 00:02:29.556 SYMLINK libspdk_accel.so 00:02:29.556 SO libspdk_event.so.13.0 00:02:29.816 SYMLINK libspdk_event.so 00:02:29.816 CC lib/bdev/bdev_rpc.o 00:02:29.816 CC lib/bdev/bdev.o 00:02:29.816 CC lib/bdev/bdev_zone.o 00:02:29.816 CC lib/bdev/part.o 00:02:29.816 CC lib/bdev/scsi_nvme.o 00:02:31.195 LIB libspdk_blob.a 00:02:31.454 SO libspdk_blob.so.11.0 00:02:31.454 SYMLINK libspdk_blob.so 00:02:32.023 CC lib/lvol/lvol.o 00:02:32.023 CC lib/blobfs/blobfs.o 00:02:32.023 CC lib/blobfs/tree.o 00:02:32.592 LIB libspdk_bdev.a 00:02:32.592 SO libspdk_bdev.so.15.0 00:02:32.853 LIB libspdk_blobfs.a 00:02:32.853 SYMLINK libspdk_bdev.so 00:02:32.853 SO libspdk_blobfs.so.10.0 00:02:32.853 LIB libspdk_lvol.a 00:02:32.853 SO libspdk_lvol.so.10.0 00:02:32.853 SYMLINK libspdk_blobfs.so 00:02:32.853 SYMLINK libspdk_lvol.so 00:02:33.117 CC lib/nbd/nbd.o 00:02:33.117 CC lib/nbd/nbd_rpc.o 00:02:33.117 CC lib/nvmf/ctrlr.o 00:02:33.117 CC lib/nvmf/ctrlr_discovery.o 00:02:33.117 CC lib/nvmf/ctrlr_bdev.o 00:02:33.117 CC lib/nvmf/subsystem.o 00:02:33.117 CC lib/nvmf/nvmf.o 00:02:33.118 CC lib/scsi/dev.o 00:02:33.118 CC lib/nvmf/nvmf_rpc.o 00:02:33.118 CC lib/ublk/ublk.o 00:02:33.118 CC lib/scsi/lun.o 00:02:33.118 CC lib/nvmf/transport.o 00:02:33.118 CC lib/nvmf/tcp.o 00:02:33.118 CC lib/scsi/port.o 00:02:33.118 CC lib/ublk/ublk_rpc.o 00:02:33.118 CC lib/scsi/scsi.o 00:02:33.118 CC lib/nvmf/stubs.o 00:02:33.118 CC lib/scsi/scsi_bdev.o 00:02:33.118 CC lib/ftl/ftl_core.o 00:02:33.118 CC lib/nvmf/mdns_server.o 00:02:33.118 CC lib/ftl/ftl_init.o 00:02:33.118 CC lib/ftl/ftl_layout.o 00:02:33.118 CC lib/nvmf/rdma.o 00:02:33.118 CC lib/scsi/scsi_pr.o 00:02:33.118 CC lib/nvmf/auth.o 00:02:33.118 CC lib/ftl/ftl_debug.o 00:02:33.118 CC lib/scsi/scsi_rpc.o 00:02:33.118 CC lib/scsi/task.o 00:02:33.118 CC lib/ftl/ftl_io.o 00:02:33.118 CC lib/ftl/ftl_sb.o 00:02:33.118 CC lib/ftl/ftl_l2p.o 00:02:33.118 CC lib/ftl/ftl_l2p_flat.o 00:02:33.118 CC lib/ftl/ftl_nv_cache.o 00:02:33.118 CC lib/ftl/ftl_band.o 00:02:33.118 CC lib/ftl/ftl_band_ops.o 00:02:33.118 CC lib/ftl/ftl_writer.o 00:02:33.118 CC lib/ftl/ftl_reloc.o 00:02:33.118 CC lib/ftl/ftl_rq.o 00:02:33.118 CC lib/ftl/ftl_l2p_cache.o 00:02:33.118 CC lib/ftl/ftl_p2l.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.118 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.118 CC lib/ftl/utils/ftl_md.o 00:02:33.118 CC lib/ftl/utils/ftl_conf.o 00:02:33.118 CC lib/ftl/utils/ftl_mempool.o 00:02:33.118 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.118 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.118 CC lib/ftl/utils/ftl_property.o 00:02:33.118 CC lib/ftl/upgrade/ftl_layout_upgrade.o 
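By this point in the make run the core SPDK libraries named in the earlier LIB / SO / SYMLINK lines (libspdk_log, libspdk_util, libspdk_nvme, ...) have been produced, while the FTL objects are still compiling. A hedged shell sketch for checking one of those shared objects; build/lib under the SPDK checkout as the output directory is an assumption about the default layout of a --with-shared build:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Shared libraries produced so far by this make run (assumed output directory).
    ls "$SPDK"/build/lib/libspdk_*.so* | head
    # The 'SO libspdk_log.so.7.0' line above names a versioned object whose SONAME can be read back:
    readelf -d "$SPDK"/build/lib/libspdk_log.so.7.0 | grep SONAME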
00:02:33.118 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.118 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.118 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.118 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.118 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.118 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.118 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.118 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.118 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.118 CC lib/ftl/ftl_trace.o 00:02:33.118 CC lib/ftl/base/ftl_base_dev.o 00:02:34.055 LIB libspdk_nbd.a 00:02:34.055 LIB libspdk_scsi.a 00:02:34.055 SO libspdk_nbd.so.7.0 00:02:34.055 SO libspdk_scsi.so.9.0 00:02:34.055 SYMLINK libspdk_nbd.so 00:02:34.055 LIB libspdk_ublk.a 00:02:34.055 SYMLINK libspdk_scsi.so 00:02:34.055 SO libspdk_ublk.so.3.0 00:02:34.055 SYMLINK libspdk_ublk.so 00:02:34.315 LIB libspdk_ftl.a 00:02:34.315 CC lib/vhost/vhost.o 00:02:34.315 CC lib/vhost/vhost_rpc.o 00:02:34.315 CC lib/vhost/vhost_scsi.o 00:02:34.315 CC lib/vhost/vhost_blk.o 00:02:34.315 CC lib/vhost/rte_vhost_user.o 00:02:34.315 CC lib/iscsi/conn.o 00:02:34.315 CC lib/iscsi/init_grp.o 00:02:34.315 CC lib/iscsi/iscsi.o 00:02:34.315 CC lib/iscsi/md5.o 00:02:34.315 CC lib/iscsi/param.o 00:02:34.315 CC lib/iscsi/tgt_node.o 00:02:34.315 CC lib/iscsi/portal_grp.o 00:02:34.315 CC lib/iscsi/iscsi_subsystem.o 00:02:34.315 CC lib/iscsi/iscsi_rpc.o 00:02:34.315 CC lib/iscsi/task.o 00:02:34.574 SO libspdk_ftl.so.9.0 00:02:34.833 SYMLINK libspdk_ftl.so 00:02:35.402 LIB libspdk_nvmf.a 00:02:35.402 LIB libspdk_vhost.a 00:02:35.662 SO libspdk_nvmf.so.18.0 00:02:35.662 SO libspdk_vhost.so.8.0 00:02:35.662 SYMLINK libspdk_vhost.so 00:02:35.662 SYMLINK libspdk_nvmf.so 00:02:35.921 LIB libspdk_iscsi.a 00:02:35.921 SO libspdk_iscsi.so.8.0 00:02:36.182 SYMLINK libspdk_iscsi.so 00:02:36.751 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.751 CC module/accel/error/accel_error.o 00:02:36.751 CC module/accel/error/accel_error_rpc.o 00:02:36.751 CC module/blob/bdev/blob_bdev.o 00:02:36.751 CC module/accel/dsa/accel_dsa.o 00:02:36.751 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.751 CC module/sock/posix/posix.o 00:02:36.751 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.751 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.751 CC module/accel/ioat/accel_ioat.o 00:02:36.751 LIB libspdk_env_dpdk_rpc.a 00:02:36.751 CC module/accel/iaa/accel_iaa.o 00:02:36.751 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.751 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.751 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.751 CC module/keyring/file/keyring.o 00:02:36.751 CC module/keyring/file/keyring_rpc.o 00:02:36.751 SO libspdk_env_dpdk_rpc.so.6.0 00:02:37.009 SYMLINK libspdk_env_dpdk_rpc.so 00:02:37.009 LIB libspdk_accel_ioat.a 00:02:37.009 LIB libspdk_scheduler_gscheduler.a 00:02:37.009 LIB libspdk_accel_error.a 00:02:37.009 LIB libspdk_keyring_file.a 00:02:37.010 LIB libspdk_scheduler_dpdk_governor.a 00:02:37.010 SO libspdk_scheduler_gscheduler.so.4.0 00:02:37.010 LIB libspdk_scheduler_dynamic.a 00:02:37.010 SO libspdk_accel_ioat.so.6.0 00:02:37.010 SO libspdk_accel_error.so.2.0 00:02:37.010 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:37.010 LIB libspdk_accel_iaa.a 00:02:37.010 SO libspdk_keyring_file.so.1.0 00:02:37.010 SO libspdk_scheduler_dynamic.so.4.0 00:02:37.010 LIB libspdk_accel_dsa.a 00:02:37.010 LIB libspdk_blob_bdev.a 00:02:37.010 SO libspdk_accel_iaa.so.3.0 00:02:37.010 SYMLINK libspdk_accel_ioat.so 00:02:37.010 SYMLINK libspdk_scheduler_gscheduler.so 00:02:37.010 SYMLINK 
libspdk_scheduler_dpdk_governor.so 00:02:37.300 SYMLINK libspdk_accel_error.so 00:02:37.300 SO libspdk_accel_dsa.so.5.0 00:02:37.300 SYMLINK libspdk_keyring_file.so 00:02:37.300 SO libspdk_blob_bdev.so.11.0 00:02:37.300 SYMLINK libspdk_scheduler_dynamic.so 00:02:37.300 SYMLINK libspdk_accel_iaa.so 00:02:37.300 SYMLINK libspdk_blob_bdev.so 00:02:37.300 SYMLINK libspdk_accel_dsa.so 00:02:37.560 LIB libspdk_sock_posix.a 00:02:37.560 SO libspdk_sock_posix.so.6.0 00:02:37.819 SYMLINK libspdk_sock_posix.so 00:02:37.819 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.819 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.819 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.819 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.819 CC module/bdev/gpt/gpt.o 00:02:37.819 CC module/bdev/delay/vbdev_delay.o 00:02:37.819 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.819 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.819 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.819 CC module/bdev/malloc/bdev_malloc.o 00:02:37.819 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.819 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.819 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.819 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.819 CC module/bdev/nvme/nvme_rpc.o 00:02:37.819 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:37.819 CC module/bdev/nvme/bdev_nvme.o 00:02:37.819 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.819 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.819 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.819 CC module/bdev/nvme/vbdev_opal.o 00:02:37.819 CC module/bdev/aio/bdev_aio.o 00:02:37.819 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.819 CC module/bdev/ftl/bdev_ftl.o 00:02:37.819 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.819 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.819 CC module/bdev/aio/bdev_aio_rpc.o 00:02:37.819 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.819 CC module/bdev/error/vbdev_error.o 00:02:37.819 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.819 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.819 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.819 CC module/bdev/null/bdev_null.o 00:02:37.819 CC module/bdev/split/vbdev_split.o 00:02:37.819 CC module/bdev/raid/bdev_raid.o 00:02:37.819 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.819 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.819 CC module/bdev/null/bdev_null_rpc.o 00:02:37.819 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.819 CC module/bdev/raid/raid1.o 00:02:37.819 CC module/bdev/raid/raid0.o 00:02:37.819 CC module/bdev/raid/concat.o 00:02:38.078 LIB libspdk_blobfs_bdev.a 00:02:38.078 SO libspdk_blobfs_bdev.so.6.0 00:02:38.078 LIB libspdk_bdev_split.a 00:02:38.078 LIB libspdk_bdev_passthru.a 00:02:38.078 LIB libspdk_bdev_delay.a 00:02:38.078 SO libspdk_bdev_split.so.6.0 00:02:38.078 LIB libspdk_bdev_error.a 00:02:38.078 SYMLINK libspdk_blobfs_bdev.so 00:02:38.078 LIB libspdk_bdev_ftl.a 00:02:38.078 SO libspdk_bdev_passthru.so.6.0 00:02:38.078 LIB libspdk_bdev_aio.a 00:02:38.078 LIB libspdk_bdev_iscsi.a 00:02:38.078 SO libspdk_bdev_delay.so.6.0 00:02:38.078 SO libspdk_bdev_error.so.6.0 00:02:38.078 LIB libspdk_bdev_null.a 00:02:38.078 LIB libspdk_bdev_zone_block.a 00:02:38.078 SO libspdk_bdev_aio.so.6.0 00:02:38.337 SO libspdk_bdev_ftl.so.6.0 00:02:38.337 LIB libspdk_bdev_malloc.a 00:02:38.337 LIB libspdk_bdev_gpt.a 00:02:38.337 SO libspdk_bdev_iscsi.so.6.0 00:02:38.337 SYMLINK libspdk_bdev_split.so 00:02:38.337 SO libspdk_bdev_null.so.6.0 00:02:38.337 SYMLINK libspdk_bdev_passthru.so 00:02:38.337 SO 
libspdk_bdev_zone_block.so.6.0 00:02:38.337 SYMLINK libspdk_bdev_delay.so 00:02:38.337 SO libspdk_bdev_gpt.so.6.0 00:02:38.337 SO libspdk_bdev_malloc.so.6.0 00:02:38.337 SYMLINK libspdk_bdev_error.so 00:02:38.337 SYMLINK libspdk_bdev_aio.so 00:02:38.337 SYMLINK libspdk_bdev_ftl.so 00:02:38.337 SYMLINK libspdk_bdev_iscsi.so 00:02:38.337 SYMLINK libspdk_bdev_null.so 00:02:38.337 SYMLINK libspdk_bdev_zone_block.so 00:02:38.337 SYMLINK libspdk_bdev_gpt.so 00:02:38.337 SYMLINK libspdk_bdev_malloc.so 00:02:38.337 LIB libspdk_bdev_virtio.a 00:02:38.337 SO libspdk_bdev_virtio.so.6.0 00:02:38.597 LIB libspdk_bdev_lvol.a 00:02:38.597 SYMLINK libspdk_bdev_virtio.so 00:02:38.597 SO libspdk_bdev_lvol.so.6.0 00:02:38.597 SYMLINK libspdk_bdev_lvol.so 00:02:38.857 LIB libspdk_bdev_raid.a 00:02:38.857 SO libspdk_bdev_raid.so.6.0 00:02:38.857 SYMLINK libspdk_bdev_raid.so 00:02:40.236 LIB libspdk_bdev_nvme.a 00:02:40.237 SO libspdk_bdev_nvme.so.7.0 00:02:40.497 SYMLINK libspdk_bdev_nvme.so 00:02:41.066 CC module/event/subsystems/vmd/vmd.o 00:02:41.066 CC module/event/subsystems/sock/sock.o 00:02:41.066 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:41.066 CC module/event/subsystems/scheduler/scheduler.o 00:02:41.066 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:41.066 CC module/event/subsystems/iobuf/iobuf.o 00:02:41.066 CC module/event/subsystems/keyring/keyring.o 00:02:41.066 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:41.326 LIB libspdk_event_scheduler.a 00:02:41.326 LIB libspdk_event_sock.a 00:02:41.326 LIB libspdk_event_vmd.a 00:02:41.326 LIB libspdk_event_keyring.a 00:02:41.326 LIB libspdk_event_vhost_blk.a 00:02:41.326 SO libspdk_event_scheduler.so.4.0 00:02:41.326 SO libspdk_event_sock.so.5.0 00:02:41.326 LIB libspdk_event_iobuf.a 00:02:41.326 SO libspdk_event_vmd.so.6.0 00:02:41.326 SO libspdk_event_keyring.so.1.0 00:02:41.326 SO libspdk_event_vhost_blk.so.3.0 00:02:41.326 SO libspdk_event_iobuf.so.3.0 00:02:41.326 SYMLINK libspdk_event_scheduler.so 00:02:41.326 SYMLINK libspdk_event_sock.so 00:02:41.326 SYMLINK libspdk_event_vhost_blk.so 00:02:41.326 SYMLINK libspdk_event_keyring.so 00:02:41.326 SYMLINK libspdk_event_vmd.so 00:02:41.585 SYMLINK libspdk_event_iobuf.so 00:02:41.845 CC module/event/subsystems/accel/accel.o 00:02:42.104 LIB libspdk_event_accel.a 00:02:42.104 SO libspdk_event_accel.so.6.0 00:02:42.104 SYMLINK libspdk_event_accel.so 00:02:42.365 CC module/event/subsystems/bdev/bdev.o 00:02:42.623 LIB libspdk_event_bdev.a 00:02:42.623 SO libspdk_event_bdev.so.6.0 00:02:42.883 SYMLINK libspdk_event_bdev.so 00:02:43.142 CC module/event/subsystems/scsi/scsi.o 00:02:43.142 CC module/event/subsystems/ublk/ublk.o 00:02:43.142 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:43.142 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:43.142 CC module/event/subsystems/nbd/nbd.o 00:02:43.401 LIB libspdk_event_nbd.a 00:02:43.401 LIB libspdk_event_ublk.a 00:02:43.401 LIB libspdk_event_scsi.a 00:02:43.401 SO libspdk_event_nbd.so.6.0 00:02:43.401 SO libspdk_event_ublk.so.3.0 00:02:43.401 SO libspdk_event_scsi.so.6.0 00:02:43.401 LIB libspdk_event_nvmf.a 00:02:43.401 SYMLINK libspdk_event_nbd.so 00:02:43.401 SYMLINK libspdk_event_scsi.so 00:02:43.401 SYMLINK libspdk_event_ublk.so 00:02:43.401 SO libspdk_event_nvmf.so.6.0 00:02:43.661 SYMLINK libspdk_event_nvmf.so 00:02:43.920 CC module/event/subsystems/iscsi/iscsi.o 00:02:43.920 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:43.920 LIB libspdk_event_vhost_scsi.a 00:02:44.180 LIB libspdk_event_iscsi.a 00:02:44.180 SO 
libspdk_event_vhost_scsi.so.3.0 00:02:44.180 SO libspdk_event_iscsi.so.6.0 00:02:44.180 SYMLINK libspdk_event_vhost_scsi.so 00:02:44.180 SYMLINK libspdk_event_iscsi.so 00:02:44.440 SO libspdk.so.6.0 00:02:44.440 SYMLINK libspdk.so 00:02:44.699 CC app/spdk_nvme_identify/identify.o 00:02:44.699 CC app/trace_record/trace_record.o 00:02:44.699 CC app/spdk_nvme_perf/perf.o 00:02:44.699 CXX app/trace/trace.o 00:02:44.699 CC app/spdk_lspci/spdk_lspci.o 00:02:44.699 CC app/spdk_nvme_discover/discovery_aer.o 00:02:44.699 TEST_HEADER include/spdk/accel.h 00:02:44.699 TEST_HEADER include/spdk/accel_module.h 00:02:44.699 CC test/rpc_client/rpc_client_test.o 00:02:44.699 TEST_HEADER include/spdk/assert.h 00:02:44.699 CC app/spdk_top/spdk_top.o 00:02:44.699 TEST_HEADER include/spdk/barrier.h 00:02:44.699 TEST_HEADER include/spdk/base64.h 00:02:44.699 TEST_HEADER include/spdk/bdev.h 00:02:44.699 TEST_HEADER include/spdk/bdev_module.h 00:02:44.699 TEST_HEADER include/spdk/bdev_zone.h 00:02:44.699 TEST_HEADER include/spdk/bit_array.h 00:02:44.699 TEST_HEADER include/spdk/bit_pool.h 00:02:44.699 TEST_HEADER include/spdk/blob_bdev.h 00:02:44.699 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:44.699 TEST_HEADER include/spdk/blobfs.h 00:02:44.699 TEST_HEADER include/spdk/blob.h 00:02:44.699 TEST_HEADER include/spdk/conf.h 00:02:44.963 TEST_HEADER include/spdk/config.h 00:02:44.963 TEST_HEADER include/spdk/cpuset.h 00:02:44.963 CC app/spdk_dd/spdk_dd.o 00:02:44.963 TEST_HEADER include/spdk/crc16.h 00:02:44.963 TEST_HEADER include/spdk/crc32.h 00:02:44.963 TEST_HEADER include/spdk/crc64.h 00:02:44.963 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:44.963 TEST_HEADER include/spdk/dif.h 00:02:44.963 TEST_HEADER include/spdk/dma.h 00:02:44.963 CC app/vhost/vhost.o 00:02:44.963 CC app/nvmf_tgt/nvmf_main.o 00:02:44.963 TEST_HEADER include/spdk/endian.h 00:02:44.963 TEST_HEADER include/spdk/env_dpdk.h 00:02:44.963 CC app/iscsi_tgt/iscsi_tgt.o 00:02:44.963 TEST_HEADER include/spdk/env.h 00:02:44.963 TEST_HEADER include/spdk/event.h 00:02:44.963 TEST_HEADER include/spdk/fd_group.h 00:02:44.963 TEST_HEADER include/spdk/fd.h 00:02:44.963 TEST_HEADER include/spdk/file.h 00:02:44.963 TEST_HEADER include/spdk/ftl.h 00:02:44.963 TEST_HEADER include/spdk/gpt_spec.h 00:02:44.963 TEST_HEADER include/spdk/hexlify.h 00:02:44.963 TEST_HEADER include/spdk/histogram_data.h 00:02:44.963 TEST_HEADER include/spdk/idxd.h 00:02:44.963 CC app/spdk_tgt/spdk_tgt.o 00:02:44.963 TEST_HEADER include/spdk/idxd_spec.h 00:02:44.963 CC examples/ioat/perf/perf.o 00:02:44.963 TEST_HEADER include/spdk/init.h 00:02:44.963 TEST_HEADER include/spdk/ioat.h 00:02:44.963 CC test/app/histogram_perf/histogram_perf.o 00:02:44.963 TEST_HEADER include/spdk/ioat_spec.h 00:02:44.963 TEST_HEADER include/spdk/iscsi_spec.h 00:02:44.963 CC test/nvme/reset/reset.o 00:02:44.963 CC test/app/jsoncat/jsoncat.o 00:02:44.963 TEST_HEADER include/spdk/json.h 00:02:44.963 CC test/nvme/aer/aer.o 00:02:44.963 CC examples/ioat/verify/verify.o 00:02:44.963 CC examples/util/zipf/zipf.o 00:02:44.963 TEST_HEADER include/spdk/jsonrpc.h 00:02:44.963 CC test/nvme/e2edp/nvme_dp.o 00:02:44.963 CC app/fio/nvme/fio_plugin.o 00:02:44.963 CC test/event/reactor_perf/reactor_perf.o 00:02:44.963 CC test/thread/poller_perf/poller_perf.o 00:02:44.963 CC test/nvme/startup/startup.o 00:02:44.963 CC test/event/reactor/reactor.o 00:02:44.963 CC examples/accel/perf/accel_perf.o 00:02:44.963 TEST_HEADER include/spdk/keyring.h 00:02:44.963 CC examples/nvme/abort/abort.o 00:02:44.963 CC 
test/nvme/reserve/reserve.o 00:02:44.963 CC test/app/stub/stub.o 00:02:44.963 CC test/nvme/overhead/overhead.o 00:02:44.963 CC test/nvme/sgl/sgl.o 00:02:44.963 TEST_HEADER include/spdk/keyring_module.h 00:02:44.963 CC examples/nvme/hotplug/hotplug.o 00:02:44.963 TEST_HEADER include/spdk/likely.h 00:02:44.963 CC examples/idxd/perf/perf.o 00:02:44.963 TEST_HEADER include/spdk/log.h 00:02:44.963 CC test/event/event_perf/event_perf.o 00:02:44.963 CC test/env/vtophys/vtophys.o 00:02:44.963 TEST_HEADER include/spdk/lvol.h 00:02:44.963 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:44.963 CC test/nvme/connect_stress/connect_stress.o 00:02:44.963 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:44.963 TEST_HEADER include/spdk/memory.h 00:02:44.963 CC test/nvme/err_injection/err_injection.o 00:02:44.963 TEST_HEADER include/spdk/mmio.h 00:02:44.963 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.963 CC test/nvme/compliance/nvme_compliance.o 00:02:44.963 CC examples/nvme/reconnect/reconnect.o 00:02:44.963 CC test/nvme/boot_partition/boot_partition.o 00:02:44.963 TEST_HEADER include/spdk/nbd.h 00:02:44.963 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:44.963 CC examples/nvme/arbitration/arbitration.o 00:02:44.963 CC examples/sock/hello_world/hello_sock.o 00:02:44.963 CC test/nvme/fused_ordering/fused_ordering.o 00:02:44.963 CC examples/nvme/hello_world/hello_world.o 00:02:44.963 TEST_HEADER include/spdk/notify.h 00:02:44.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:44.963 CC test/nvme/simple_copy/simple_copy.o 00:02:44.963 CC test/env/memory/memory_ut.o 00:02:44.963 CC examples/blob/hello_world/hello_blob.o 00:02:44.963 CC examples/vmd/led/led.o 00:02:44.963 TEST_HEADER include/spdk/nvme.h 00:02:44.963 CC test/event/app_repeat/app_repeat.o 00:02:44.963 CC examples/blob/cli/blobcli.o 00:02:44.963 TEST_HEADER include/spdk/nvme_intel.h 00:02:44.963 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:44.963 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:44.963 TEST_HEADER include/spdk/nvme_spec.h 00:02:44.963 TEST_HEADER include/spdk/nvme_zns.h 00:02:44.963 CC examples/bdev/hello_world/hello_bdev.o 00:02:44.963 CC test/bdev/bdevio/bdevio.o 00:02:44.963 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:44.963 CC test/accel/dif/dif.o 00:02:44.963 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.963 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:44.963 CC examples/thread/thread/thread_ex.o 00:02:44.963 CC test/blobfs/mkfs/mkfs.o 00:02:44.963 TEST_HEADER include/spdk/nvmf.h 00:02:44.963 TEST_HEADER include/spdk/nvmf_spec.h 00:02:44.963 TEST_HEADER include/spdk/nvmf_transport.h 00:02:44.963 CC test/event/scheduler/scheduler.o 00:02:44.963 CC test/dma/test_dma/test_dma.o 00:02:44.963 TEST_HEADER include/spdk/opal.h 00:02:44.963 TEST_HEADER include/spdk/opal_spec.h 00:02:44.963 CC app/fio/bdev/fio_plugin.o 00:02:44.963 CC test/app/bdev_svc/bdev_svc.o 00:02:45.222 TEST_HEADER include/spdk/pci_ids.h 00:02:45.222 TEST_HEADER include/spdk/pipe.h 00:02:45.222 TEST_HEADER include/spdk/queue.h 00:02:45.222 LINK spdk_lspci 00:02:45.222 TEST_HEADER include/spdk/reduce.h 00:02:45.222 TEST_HEADER include/spdk/rpc.h 00:02:45.222 TEST_HEADER include/spdk/scheduler.h 00:02:45.222 CC examples/nvmf/nvmf/nvmf.o 00:02:45.222 TEST_HEADER include/spdk/scsi.h 00:02:45.222 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.222 TEST_HEADER include/spdk/sock.h 00:02:45.222 TEST_HEADER include/spdk/stdinc.h 00:02:45.222 TEST_HEADER include/spdk/string.h 00:02:45.222 TEST_HEADER include/spdk/thread.h 00:02:45.223 CC test/lvol/esnap/esnap.o 00:02:45.223 
TEST_HEADER include/spdk/trace.h 00:02:45.223 TEST_HEADER include/spdk/trace_parser.h 00:02:45.223 TEST_HEADER include/spdk/tree.h 00:02:45.223 LINK spdk_nvme_discover 00:02:45.223 LINK rpc_client_test 00:02:45.223 TEST_HEADER include/spdk/ublk.h 00:02:45.223 TEST_HEADER include/spdk/util.h 00:02:45.223 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.223 TEST_HEADER include/spdk/uuid.h 00:02:45.223 TEST_HEADER include/spdk/version.h 00:02:45.223 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.223 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.223 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:45.223 TEST_HEADER include/spdk/vhost.h 00:02:45.223 TEST_HEADER include/spdk/vmd.h 00:02:45.223 TEST_HEADER include/spdk/xor.h 00:02:45.223 TEST_HEADER include/spdk/zipf.h 00:02:45.223 CXX test/cpp_headers/accel.o 00:02:45.223 LINK spdk_trace_record 00:02:45.223 LINK vhost 00:02:45.223 LINK histogram_perf 00:02:45.223 LINK reactor_perf 00:02:45.223 LINK poller_perf 00:02:45.223 LINK reactor 00:02:45.223 LINK interrupt_tgt 00:02:45.223 LINK jsoncat 00:02:45.223 LINK event_perf 00:02:45.223 LINK zipf 00:02:45.223 LINK iscsi_tgt 00:02:45.483 LINK lsvmd 00:02:45.483 LINK nvmf_tgt 00:02:45.483 LINK startup 00:02:45.483 LINK stub 00:02:45.483 LINK env_dpdk_post_init 00:02:45.483 LINK vtophys 00:02:45.483 LINK boot_partition 00:02:45.483 LINK reserve 00:02:45.483 LINK led 00:02:45.483 LINK app_repeat 00:02:45.483 LINK verify 00:02:45.483 LINK ioat_perf 00:02:45.483 LINK connect_stress 00:02:45.483 LINK err_injection 00:02:45.483 LINK bdev_svc 00:02:45.483 LINK spdk_tgt 00:02:45.483 LINK cmb_copy 00:02:45.483 LINK doorbell_aers 00:02:45.483 LINK hotplug 00:02:45.483 LINK fused_ordering 00:02:45.483 LINK overhead 00:02:45.483 LINK sgl 00:02:45.483 LINK nvme_dp 00:02:45.483 LINK hello_world 00:02:45.483 LINK hello_bdev 00:02:45.483 LINK scheduler 00:02:45.483 LINK mkfs 00:02:45.483 LINK hello_sock 00:02:45.483 LINK hello_blob 00:02:45.483 LINK reset 00:02:45.483 LINK spdk_trace 00:02:45.483 LINK simple_copy 00:02:45.749 LINK aer 00:02:45.749 CXX test/cpp_headers/accel_module.o 00:02:45.749 LINK thread 00:02:45.749 LINK nvme_compliance 00:02:45.749 CC test/env/pci/pci_ut.o 00:02:45.749 CXX test/cpp_headers/assert.o 00:02:45.749 LINK spdk_dd 00:02:45.749 LINK idxd_perf 00:02:45.749 LINK arbitration 00:02:45.749 CXX test/cpp_headers/barrier.o 00:02:45.749 CXX test/cpp_headers/base64.o 00:02:45.749 CXX test/cpp_headers/bdev.o 00:02:45.749 CXX test/cpp_headers/bdev_module.o 00:02:45.749 CXX test/cpp_headers/bdev_zone.o 00:02:45.749 LINK reconnect 00:02:45.749 LINK nvmf 00:02:45.749 LINK accel_perf 00:02:45.749 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.749 CC test/nvme/fdp/fdp.o 00:02:45.749 CXX test/cpp_headers/bit_array.o 00:02:45.749 LINK bdevio 00:02:45.749 CXX test/cpp_headers/bit_pool.o 00:02:45.749 LINK abort 00:02:45.749 CXX test/cpp_headers/blob_bdev.o 00:02:45.749 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.749 LINK dif 00:02:45.749 CXX test/cpp_headers/blobfs.o 00:02:45.749 CXX test/cpp_headers/blob.o 00:02:45.749 CXX test/cpp_headers/conf.o 00:02:45.749 CC test/nvme/cuse/cuse.o 00:02:45.749 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:45.749 CXX test/cpp_headers/config.o 00:02:45.749 CXX test/cpp_headers/cpuset.o 00:02:45.749 CXX test/cpp_headers/crc16.o 00:02:45.749 CXX test/cpp_headers/crc32.o 00:02:45.749 CXX test/cpp_headers/crc64.o 00:02:45.749 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:45.749 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:45.749 CXX 
test/cpp_headers/dif.o 00:02:45.749 CXX test/cpp_headers/dma.o 00:02:45.749 CXX test/cpp_headers/endian.o 00:02:45.749 CXX test/cpp_headers/env_dpdk.o 00:02:45.749 CXX test/cpp_headers/env.o 00:02:45.749 CXX test/cpp_headers/event.o 00:02:46.011 LINK test_dma 00:02:46.011 CXX test/cpp_headers/fd_group.o 00:02:46.011 CXX test/cpp_headers/fd.o 00:02:46.011 CXX test/cpp_headers/file.o 00:02:46.011 CXX test/cpp_headers/ftl.o 00:02:46.011 LINK spdk_nvme 00:02:46.011 CXX test/cpp_headers/gpt_spec.o 00:02:46.011 CXX test/cpp_headers/hexlify.o 00:02:46.011 CXX test/cpp_headers/histogram_data.o 00:02:46.011 CXX test/cpp_headers/idxd.o 00:02:46.011 CXX test/cpp_headers/idxd_spec.o 00:02:46.011 CXX test/cpp_headers/init.o 00:02:46.011 LINK nvme_manage 00:02:46.011 CXX test/cpp_headers/ioat.o 00:02:46.011 CXX test/cpp_headers/ioat_spec.o 00:02:46.011 CXX test/cpp_headers/iscsi_spec.o 00:02:46.011 CXX test/cpp_headers/json.o 00:02:46.011 CXX test/cpp_headers/jsonrpc.o 00:02:46.011 CXX test/cpp_headers/keyring.o 00:02:46.011 CXX test/cpp_headers/keyring_module.o 00:02:46.011 CXX test/cpp_headers/likely.o 00:02:46.011 CXX test/cpp_headers/lvol.o 00:02:46.011 CXX test/cpp_headers/log.o 00:02:46.011 CXX test/cpp_headers/memory.o 00:02:46.011 LINK blobcli 00:02:46.011 CXX test/cpp_headers/mmio.o 00:02:46.011 LINK nvme_fuzz 00:02:46.011 CXX test/cpp_headers/nbd.o 00:02:46.011 CXX test/cpp_headers/notify.o 00:02:46.011 CXX test/cpp_headers/nvme.o 00:02:46.011 CXX test/cpp_headers/nvme_intel.o 00:02:46.011 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.011 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.011 CXX test/cpp_headers/nvme_spec.o 00:02:46.276 CXX test/cpp_headers/nvme_zns.o 00:02:46.276 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.276 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.276 CXX test/cpp_headers/nvmf.o 00:02:46.276 CXX test/cpp_headers/nvmf_spec.o 00:02:46.276 LINK spdk_bdev 00:02:46.276 CXX test/cpp_headers/nvmf_transport.o 00:02:46.276 CXX test/cpp_headers/opal.o 00:02:46.276 CXX test/cpp_headers/opal_spec.o 00:02:46.276 CXX test/cpp_headers/pci_ids.o 00:02:46.276 CXX test/cpp_headers/pipe.o 00:02:46.276 CXX test/cpp_headers/queue.o 00:02:46.276 CXX test/cpp_headers/reduce.o 00:02:46.276 LINK mem_callbacks 00:02:46.276 CXX test/cpp_headers/rpc.o 00:02:46.276 CXX test/cpp_headers/scsi.o 00:02:46.276 CXX test/cpp_headers/scheduler.o 00:02:46.276 LINK pmr_persistence 00:02:46.276 CXX test/cpp_headers/scsi_spec.o 00:02:46.276 CXX test/cpp_headers/sock.o 00:02:46.276 CXX test/cpp_headers/stdinc.o 00:02:46.276 CXX test/cpp_headers/string.o 00:02:46.276 CXX test/cpp_headers/thread.o 00:02:46.276 CXX test/cpp_headers/trace.o 00:02:46.276 CXX test/cpp_headers/trace_parser.o 00:02:46.276 CXX test/cpp_headers/tree.o 00:02:46.276 CXX test/cpp_headers/util.o 00:02:46.276 CXX test/cpp_headers/ublk.o 00:02:46.276 LINK pci_ut 00:02:46.536 CXX test/cpp_headers/uuid.o 00:02:46.536 LINK spdk_nvme_perf 00:02:46.536 CXX test/cpp_headers/version.o 00:02:46.536 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.536 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.536 LINK fdp 00:02:46.536 LINK spdk_nvme_identify 00:02:46.536 CXX test/cpp_headers/vhost.o 00:02:46.536 CXX test/cpp_headers/vmd.o 00:02:46.536 CXX test/cpp_headers/xor.o 00:02:46.536 CXX test/cpp_headers/zipf.o 00:02:46.536 LINK bdevperf 00:02:46.536 LINK spdk_top 00:02:46.536 LINK memory_ut 00:02:46.795 LINK vhost_fuzz 00:02:47.362 LINK cuse 00:02:47.930 LINK iscsi_fuzz 00:02:51.218 LINK esnap 00:02:51.218 00:02:51.218 real 0m43.938s 00:02:51.218 user 6m43.564s 
00:02:51.218 sys 2m38.388s 00:02:51.218 02:28:54 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:51.218 02:28:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:51.218 ************************************ 00:02:51.218 END TEST make 00:02:51.218 ************************************ 00:02:51.218 02:28:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:51.218 02:28:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:51.218 02:28:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:51.218 02:28:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.218 02:28:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:51.218 02:28:54 -- pm/common@44 -- $ pid=541589 00:02:51.218 02:28:54 -- pm/common@50 -- $ kill -TERM 541589 00:02:51.218 02:28:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.218 02:28:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:51.218 02:28:54 -- pm/common@44 -- $ pid=541590 00:02:51.218 02:28:54 -- pm/common@50 -- $ kill -TERM 541590 00:02:51.218 02:28:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.218 02:28:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:51.218 02:28:54 -- pm/common@44 -- $ pid=541592 00:02:51.218 02:28:54 -- pm/common@50 -- $ kill -TERM 541592 00:02:51.218 02:28:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.219 02:28:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:51.219 02:28:54 -- pm/common@44 -- $ pid=541622 00:02:51.219 02:28:54 -- pm/common@50 -- $ sudo -E kill -TERM 541622 00:02:51.478 02:28:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:51.478 02:28:54 -- nvmf/common.sh@7 -- # uname -s 00:02:51.478 02:28:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:51.478 02:28:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:51.478 02:28:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:51.478 02:28:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:51.478 02:28:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:51.478 02:28:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:51.478 02:28:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:51.479 02:28:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:51.479 02:28:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:51.479 02:28:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:51.479 02:28:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:02:51.479 02:28:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:02:51.479 02:28:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:51.479 02:28:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:51.479 02:28:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:51.479 02:28:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:51.479 02:28:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:51.479 02:28:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:51.479 02:28:54 -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.479 02:28:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.479 02:28:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.479 02:28:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.479 02:28:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.479 02:28:54 -- paths/export.sh@5 -- # export PATH 00:02:51.479 02:28:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.479 02:28:54 -- nvmf/common.sh@47 -- # : 0 00:02:51.479 02:28:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:51.479 02:28:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:51.479 02:28:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:51.479 02:28:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:51.479 02:28:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:51.479 02:28:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:51.479 02:28:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:51.479 02:28:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:51.479 02:28:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:51.479 02:28:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:51.479 02:28:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:51.479 02:28:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:51.479 02:28:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:51.479 02:28:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:51.479 02:28:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:51.479 02:28:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:51.479 02:28:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:51.479 02:28:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:51.479 02:28:54 -- spdk/autotest.sh@48 -- # udevadm_pid=614642 00:02:51.479 02:28:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:51.479 02:28:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:51.479 02:28:54 -- pm/common@17 -- # local monitor 00:02:51.479 02:28:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.479 02:28:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.479 02:28:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 
00:02:51.479 02:28:54 -- pm/common@21 -- # date +%s 00:02:51.479 02:28:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.479 02:28:54 -- pm/common@21 -- # date +%s 00:02:51.479 02:28:54 -- pm/common@25 -- # sleep 1 00:02:51.479 02:28:54 -- pm/common@21 -- # date +%s 00:02:51.479 02:28:54 -- pm/common@21 -- # date +%s 00:02:51.479 02:28:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732934 00:02:51.479 02:28:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732934 00:02:51.479 02:28:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732934 00:02:51.479 02:28:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715732934 00:02:51.479 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732934_collect-vmstat.pm.log 00:02:51.479 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732934_collect-cpu-load.pm.log 00:02:51.479 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732934_collect-cpu-temp.pm.log 00:02:51.479 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715732934_collect-bmc-pm.bmc.pm.log 00:02:52.418 02:28:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:52.418 02:28:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:52.418 02:28:55 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:52.418 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:02:52.418 02:28:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:52.418 02:28:55 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:52.418 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:02:52.418 02:28:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:52.418 02:28:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:52.418 02:28:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:52.418 02:28:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:52.418 02:28:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:52.418 02:28:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:52.418 02:28:55 -- common/autotest_common.sh@1452 -- # uname 00:02:52.419 02:28:55 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:52.419 02:28:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:52.419 02:28:55 -- common/autotest_common.sh@1472 -- # uname 00:02:52.419 02:28:55 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:52.419 02:28:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:52.419 02:28:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:52.419 02:28:55 -- spdk/autotest.sh@72 -- # hash lcov 00:02:52.419 
02:28:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:52.419 02:28:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:52.419 --rc lcov_branch_coverage=1 00:02:52.419 --rc lcov_function_coverage=1 00:02:52.419 --rc genhtml_branch_coverage=1 00:02:52.419 --rc genhtml_function_coverage=1 00:02:52.419 --rc genhtml_legend=1 00:02:52.419 --rc geninfo_all_blocks=1 00:02:52.419 ' 00:02:52.419 02:28:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:52.419 --rc lcov_branch_coverage=1 00:02:52.419 --rc lcov_function_coverage=1 00:02:52.419 --rc genhtml_branch_coverage=1 00:02:52.419 --rc genhtml_function_coverage=1 00:02:52.419 --rc genhtml_legend=1 00:02:52.419 --rc geninfo_all_blocks=1 00:02:52.419 ' 00:02:52.419 02:28:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:52.419 --rc lcov_branch_coverage=1 00:02:52.419 --rc lcov_function_coverage=1 00:02:52.419 --rc genhtml_branch_coverage=1 00:02:52.419 --rc genhtml_function_coverage=1 00:02:52.419 --rc genhtml_legend=1 00:02:52.419 --rc geninfo_all_blocks=1 00:02:52.419 --no-external' 00:02:52.419 02:28:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:52.419 --rc lcov_branch_coverage=1 00:02:52.419 --rc lcov_function_coverage=1 00:02:52.419 --rc genhtml_branch_coverage=1 00:02:52.419 --rc genhtml_function_coverage=1 00:02:52.419 --rc genhtml_legend=1 00:02:52.419 --rc geninfo_all_blocks=1 00:02:52.419 --no-external' 00:02:52.419 02:28:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:52.681 lcov: LCOV version 1.14 00:02:52.681 02:28:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:04.897 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:04.897 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:06.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:06.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:06.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:06.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:06.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:06.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:24.954 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:24.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:24.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:24.955 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:24.955 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no 
functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:24.955 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:24.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:25.893 02:29:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:25.893 02:29:29 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:25.893 02:29:29 -- common/autotest_common.sh@10 -- # set +x 00:03:25.893 02:29:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:25.893 02:29:29 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.184 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:03:29.184 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:29.184 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:29.443 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:29.443 02:29:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:29.443 02:29:32 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:29.443 02:29:32 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:29.444 02:29:32 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:29.444 02:29:32 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:29.444 02:29:32 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:29.444 02:29:32 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:29.444 02:29:32 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.444 02:29:32 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:29.444 02:29:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:29.444 02:29:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:29.444 02:29:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:29.444 02:29:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:29.444 02:29:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:29.703 02:29:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:29.703 No valid GPT data, bailing 00:03:29.703 02:29:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:29.703 02:29:32 -- scripts/common.sh@391 -- # pt= 00:03:29.703 02:29:32 -- scripts/common.sh@392 -- # return 1 00:03:29.703 02:29:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:29.703 1+0 records in 00:03:29.703 1+0 records out 
00:03:29.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571497 s, 183 MB/s 00:03:29.703 02:29:32 -- spdk/autotest.sh@118 -- # sync 00:03:29.703 02:29:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:29.703 02:29:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:29.703 02:29:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:34.981 02:29:37 -- spdk/autotest.sh@124 -- # uname -s 00:03:34.981 02:29:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:34.981 02:29:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:34.981 02:29:37 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:34.981 02:29:37 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:34.981 02:29:37 -- common/autotest_common.sh@10 -- # set +x 00:03:34.981 ************************************ 00:03:34.981 START TEST setup.sh 00:03:34.981 ************************************ 00:03:34.981 02:29:38 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:34.981 * Looking for test storage... 00:03:34.981 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:34.981 02:29:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:34.982 02:29:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:34.982 02:29:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:34.982 02:29:38 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:34.982 02:29:38 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:34.982 02:29:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.982 ************************************ 00:03:34.982 START TEST acl 00:03:34.982 ************************************ 00:03:34.982 02:29:38 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:35.241 * Looking for test storage... 
00:03:35.241 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.241 02:29:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.241 02:29:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:35.241 02:29:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.241 02:29:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.437 02:29:42 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.437 02:29:42 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.437 02:29:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.437 02:29:42 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.437 02:29:42 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.437 02:29:42 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:41.975 Hugepages 00:03:41.975 node hugesize free / total 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.975 00:03:41.975 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.975 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:41.976 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:42.235 02:29:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:42.235 02:29:45 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:42.235 02:29:45 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:42.235 02:29:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.235 ************************************ 00:03:42.235 START TEST denied 00:03:42.235 ************************************ 00:03:42.235 02:29:45 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:03:42.235 02:29:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:42.235 02:29:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:42.235 02:29:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:42.235 02:29:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.235 02:29:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:45.525 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:45.525 02:29:48 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.525 02:29:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.802 00:03:50.802 real 0m7.871s 00:03:50.802 user 0m2.286s 00:03:50.802 sys 0m4.768s 00:03:50.802 02:29:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:50.802 02:29:53 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.802 ************************************ 00:03:50.802 END TEST denied 00:03:50.802 ************************************ 00:03:50.802 02:29:53 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.802 02:29:53 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:50.802 02:29:53 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:50.802 02:29:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.802 ************************************ 00:03:50.802 START TEST allowed 00:03:50.802 ************************************ 00:03:50.802 02:29:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:03:50.802 02:29:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:50.802 02:29:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:50.802 02:29:53 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:50.802 02:29:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.802 02:29:53 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:56.143 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:56.143 02:29:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:56.143 02:29:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.143 02:29:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.143 02:29:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.143 02:29:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.436 00:03:59.436 real 0m8.923s 00:03:59.436 user 0m2.506s 00:03:59.436 sys 0m4.824s 00:03:59.436 02:30:02 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.436 02:30:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 ************************************ 00:03:59.436 END TEST allowed 00:03:59.436 ************************************ 00:03:59.436 00:03:59.436 real 0m24.182s 00:03:59.436 user 0m7.526s 00:03:59.436 sys 0m14.518s 00:03:59.436 02:30:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.436 02:30:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 ************************************ 00:03:59.436 END TEST acl 00:03:59.436 ************************************ 00:03:59.436 02:30:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.436 02:30:02 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.436 02:30:02 setup.sh -- common/autotest_common.sh@1104 
-- # xtrace_disable 00:03:59.436 02:30:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.436 ************************************ 00:03:59.436 START TEST hugepages 00:03:59.436 ************************************ 00:03:59.436 02:30:02 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.436 * Looking for test storage... 00:03:59.436 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 42871476 kB' 'MemAvailable: 46472152 kB' 'Buffers: 11760 kB' 'Cached: 10709416 kB' 'SwapCached: 0 kB' 'Active: 7744608 kB' 'Inactive: 3430820 kB' 'Active(anon): 7180584 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3430820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457744 kB' 'Mapped: 152408 kB' 'Shmem: 6726332 kB' 'KReclaimable: 205548 kB' 'Slab: 574056 kB' 'SReclaimable: 205548 kB' 'SUnreclaim: 368508 kB' 'KernelStack: 16560 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439264 kB' 'Committed_AS: 8433680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199520 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:03:59.436 02:30:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.436 02:30:02 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue
[... per-field xtrace omitted: the identical IFS=': ' / read -r var val _ / continue pattern repeats for every remaining /proc/meminfo field from MemFree through HugePages_Rsvd ...]
00:03:59.438 02:30:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export 
CLEAR_HUGE=yes 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.438 02:30:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.438 02:30:02 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.438 02:30:02 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.438 02:30:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.438 ************************************ 00:03:59.438 START TEST default_setup 00:03:59.438 ************************************ 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.438 02:30:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:02.728 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.728 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:02.728 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.988 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.988 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.988 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.988 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.988 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.989 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.989 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.989 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44987332 kB' 'MemAvailable: 48587736 kB' 'Buffers: 11760 kB' 'Cached: 10709816 kB' 'SwapCached: 0 kB' 'Active: 7774980 kB' 'Inactive: 3430820 kB' 'Active(anon): 7210956 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3430820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487120 kB' 'Mapped: 152204 kB' 'Shmem: 6726732 kB' 'KReclaimable: 205004 kB' 'Slab: 572544 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 367540 kB' 'KernelStack: 16512 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8466028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199588 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.989 02:30:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': '
[... per-field xtrace omitted: the identical IFS=': ' / read -r var val _ / continue pattern repeats for the fields from Active(anon) through CommitLimit before AnonHugePages is reached ...]
00:04:02.990 02:30:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:02.990 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f 
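For readers skimming the trace: the block above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested key (here AnonHugePages) matches, then echoing its value, which is how anon=0 shows up at hugepages.sh@97. A minimal sketch of that lookup, reconstructed from the IFS/read/continue statements visible in the trace (illustrative only; the real helper also handles per-NUMA-node lookups via /sys/devices/system/node/<node>/meminfo, which is why the trace shows the node= and mapfile steps):

    # Sketch of the key/value scan exercised above -- not the verbatim setup/common.sh source.
    get_meminfo_sketch() {                     # illustrative name
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching field
            echo "$val"                        # IFS already split the numeric value from its unit
            return 0
        done < /proc/meminfo
    }
    # e.g. get_meminfo_sketch AnonHugePages   -> 0 on this node, hence anon=0 above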
00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
[... get_meminfo prologue as before: local node= / local var val / local mem_f mem / mem_f=/proc/meminfo / node-meminfo existence test / mapfile -t mem ...]
00:04:03.254 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44987440 kB' 'MemAvailable: 48587844 kB' 'Buffers: 11760 kB' 'Cached: 10710324 kB' 'SwapCached: 0 kB' 'Active: 7773504 kB' 'Inactive: 3430820 kB' 'Active(anon): 7209480 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3430820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485732 kB' 'Mapped: 152588 kB' 'Shmem: 6726736 kB' 'KReclaimable: 205004 kB' 'Slab: 572512 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 367508 kB' 'KernelStack: 16352 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8465868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199508 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
[... setup/common.sh@31-32 field-by-field comparison against \H\u\g\e\P\a\g\e\s\_\S\u\r\p repeats for MemTotal through HugePages_Rsvd; none match ...]
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
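At this point the script has anon and surp; the next lookup (HugePages_Rsvd) follows the same pattern against a fresh snapshot. As a convenience for anyone reproducing the run by hand (this one-liner is not something the test itself executes), the relevant fields can be pulled in a single pass:

    # Spot-check of the hugepage counters collected above (reader convenience only):
    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {print $1, $2}' /proc/meminfo
    # Per the meminfo snapshots logged in this run:
    #   AnonHugePages: 0
    #   HugePages_Total: 1024
    #   HugePages_Free: 1024
    #   HugePages_Rsvd: 0
    #   HugePages_Surp: 0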
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... get_meminfo prologue repeated as above ...]
00:04:03.256 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44986180 kB' 'MemAvailable: 48586584 kB' 'Buffers: 11760 kB' 'Cached: 10710324 kB' 'SwapCached: 0 kB' 'Active: 7774232 kB' 'Inactive: 3430820 kB' 'Active(anon): 7210208 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3430820 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486380 kB' 'Mapped: 152588 kB' 'Shmem: 6726736 kB' 'KReclaimable: 205004 kB' 'Slab: 572512 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 367508 kB' 'KernelStack: 16432 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8465824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199540 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
[... field-by-field comparison against \H\u\g\e\P\a\g\e\s\_\R\s\v\d repeats for MemTotal through HugePages_Free; none match ...]
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.258 nr_hugepages=1024
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.258 resv_hugepages=0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.258 surplus_hugepages=0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.258 anon_hugepages=0
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
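The four echoed values plus the two arithmetic guards at hugepages.sh@107-109 are the summary this default_setup pass prints: the 1024 requested 2048 kB pages are all present, with nothing reserved, surplus, or anonymous. Restating that check in isolation with the values from the log (a standalone illustration; the literal 1024 on the left-hand side is whatever expected count the script expanded there, and the variable names simply follow the trace):

    # The consistency check logged above, stated on its own:
    nr_hugepages=1024 surp=0 resv=0 anon=0
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "hugepage pool consistent: ${nr_hugepages} pages of 2048 kB, none reserved or surplus"
    fi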
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.258 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [the same IFS=': ' / read -r var val _ / continue xtrace repeats for every remaining /proc/meminfo field, Inactive through HardwareCorrupted, none of which matches HugePages_Total] 00:04:03.259 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 
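The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until the requested HugePages_Total line turns up. A minimal standalone sketch of the same read-and-skip technique, using a hypothetical helper name get_mem_field that is not part of the SPDK scripts (the real script mapfiles the file into an array first, but the parsing idea is the same):

#!/usr/bin/env bash
# Print the value of one /proc/meminfo field, e.g. "get_mem_field HugePages_Total" -> 1024.
get_mem_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip every field that is not the one requested
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
get_mem_field HugePages_Total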
== nr_hugepages + surp + resv )) 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 25670452 kB' 'MemUsed: 6916612 kB' 'SwapCached: 0 kB' 'Active: 3410912 kB' 'Inactive: 103956 kB' 'Active(anon): 2982952 kB' 'Inactive(anon): 0 kB' 'Active(file): 427960 kB' 'Inactive(file): 103956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3363012 kB' 'Mapped: 86936 kB' 'AnonPages: 154988 kB' 'Shmem: 2831096 kB' 'KernelStack: 9032 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117376 kB' 'Slab: 321800 kB' 'SReclaimable: 117376 kB' 'SUnreclaim: 204424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.260 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue [the same xtrace repeats for every remaining node0 meminfo field, MemFree through Unaccepted, none of which matches HugePages_Surp] 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- 
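For this per-node pass the same walk runs against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that setup/common.sh strips with the extglob pattern 'Node +([0-9]) '. A rough standalone equivalent, under the hypothetical name node_mem_field (not an SPDK helper):

#!/usr/bin/env bash
shopt -s extglob                        # required for the +([0-9]) pattern below
# Print one field from a NUMA node's meminfo; lines look like "Node 0 HugePages_Surp: 0".
node_mem_field() {
    local node=$1 want=$2
    local mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    [[ -e $mem_f ]] || return 1
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, as in the xtrace above
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
node_mem_field 0 HugePages_Surp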
setup/common.sh@31 -- # read -r var val _ 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.261 node0=1024 expecting 1024 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.261 00:04:03.261 real 0m3.728s 00:04:03.261 user 0m1.365s 00:04:03.261 sys 0m2.432s 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:03.261 02:30:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:03.261 ************************************ 00:04:03.261 END TEST default_setup 00:04:03.261 ************************************ 00:04:03.261 02:30:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.261 02:30:06 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:03.261 02:30:06 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:03.261 02:30:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.261 ************************************ 00:04:03.261 START TEST per_node_1G_alloc 00:04:03.261 ************************************ 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:03.261 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # 
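The "node0=1024 expecting 1024" line above is the test comparing each node's hugepage count, as read from the per-node meminfo, with the split it expects. A condensed sketch of that comparison done directly against sysfs, using assumed expected values rather than the hugepages.sh bookkeeping arrays:

#!/usr/bin/env bash
# Compare each NUMA node's allocated 2 MiB hugepages with an expected per-node split.
declare -A expected=([0]=1024 [1]=0)   # assumed split for this sketch only
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$got expecting ${expected[$node]:-0}"
    (( got == ${expected[$node]:-0} )) || echo "node$node mismatch" >&2
done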
nr_hugepages=512 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.262 02:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:06.555 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.555 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.555 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- 
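Here the test asks spdk/scripts/setup.sh for 512 hugepages on each of node 0 and node 1 via NRHUGE=512 and HUGENODE=0,1, as logged above. The kernel interface behind that request is the per-node sysfs counter; the loop below only illustrates that interface and is not what setup.sh literally executes:

#!/usr/bin/env bash
# Request 512 x 2 MiB hugepages on NUMA nodes 0 and 1 (needs root).
NRHUGE=512
for node in 0 1; do
    echo "$NRHUGE" > /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
done
# Show what the kernel actually granted on each node.
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages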
setup/hugepages.sh@89 -- # local node 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44954684 kB' 'MemAvailable: 48555508 kB' 'Buffers: 11760 kB' 'Cached: 10710560 kB' 'SwapCached: 0 kB' 'Active: 7784208 kB' 'Inactive: 3431212 kB' 'Active(anon): 7220184 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3431212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496252 kB' 'Mapped: 154116 kB' 'Shmem: 6727084 kB' 'KReclaimable: 205060 kB' 'Slab: 573152 kB' 'SReclaimable: 205060 kB' 'SUnreclaim: 368092 kB' 'KernelStack: 16624 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8487340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199928 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.555 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue [the same xtrace repeats for every /proc/meminfo field from MemFree through WritebackTmp, none of which matches AnonHugePages] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc
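Before reading AnonHugePages, hugepages.sh checks /sys/kernel/mm/transparent_hugepage/enabled; the value "always [madvise] never" seen earlier means THP is left in madvise mode rather than disabled, so the script goes on to read the AnonHugePages counter. A small sketch of that gate, under the hypothetical name thp_anon_kb:

#!/usr/bin/env bash
# Print AnonHugePages (kB) unless transparent hugepages are disabled outright.
thp_anon_kb() {
    local enabled
    enabled=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $enabled == *"[never]"* ]]; then
        echo 0                                                # THP off: nothing to report
    else
        awk '/^AnonHugePages:/ {print $2}' /proc/meminfo
    fi
}
thp_anon_kb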
-- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.822 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44957480 kB' 'MemAvailable: 48558304 kB' 'Buffers: 11760 kB' 'Cached: 10710564 kB' 'SwapCached: 0 kB' 'Active: 7782660 kB' 'Inactive: 3431212 kB' 'Active(anon): 7218636 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3431212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494712 kB' 'Mapped: 154080 kB' 'Shmem: 6727088 kB' 'KReclaimable: 205060 kB' 'Slab: 573184 kB' 'SReclaimable: 205060 kB' 'SUnreclaim: 368124 kB' 'KernelStack: 16432 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8474580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199864 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
00:04:06.823 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (read/continue repeated for each field, MemTotal through FilePmdMapped, while scanning for HugePages_Surp)
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (remaining fields CmaTotal through HugePages_Rsvd skipped)
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= (preamble through common.sh@29 identical to the HugePages_Surp call above: mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip)
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44957388 kB' 'MemAvailable: 48558212 kB' 'Buffers: 11760 kB' 'Cached: 10710584 kB' 'SwapCached: 0 kB' 'Active: 7782160 kB' 'Inactive: 3431212 kB' 'Active(anon): 7218136 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3431212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494244 kB' 'Mapped: 154064 kB' 'Shmem: 6727108 kB' 'KReclaimable: 205060 kB' 'Slab: 573204 kB' 'SReclaimable: 205060 kB' 'SUnreclaim: 368144 kB' 'KernelStack: 16432 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8474740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199784 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
00:04:06.825 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (read/continue repeated for each field, MemTotal through FileHugePages, while scanning for HugePages_Rsvd)
00:04:06.826 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (remaining fields FilePmdMapped through HugePages_Free skipped)
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.827 nr_hugepages=1024
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.827 resv_hugepages=0
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.827 surplus_hugepages=0
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.827 anon_hugepages=0
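The four values just echoed feed the consistency check that follows at setup/hugepages.sh@107 and @109. A hedged, standalone restatement of that check (variable names are mine; treating the compared 1024 as the kernel's current HugePages_Total is an inference from the snapshots above, where Total and Free are both 1024):

  # Re-derive the operands straight from /proc/meminfo (real field names).
  nr_hugepages=1024                                              # the value echoed above
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  got=$(awk  '/^HugePages_Total:/ {print $2}' /proc/meminfo)     # assumed equivalent of the trace's left-hand 1024
  # Same shape as hugepages.sh@107 and @109 in the trace below.
  if (( got == nr_hugepages + surp + resv )) && (( got == nr_hugepages )); then
          echo "hugepage accounting consistent: total=$got surplus=$surp reserved=$resv"
  else
          echo "hugepage accounting mismatch: total=$got expected=$nr_hugepages" >&2
  fi

With surplus and reserved both 0 on this box, both comparisons reduce to 1024 == 1024, which is consistent with the trace proceeding straight to the HugePages_Total lookup.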
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= (preamble through common.sh@29 identical to the calls above)
00:04:06.827 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44957948 kB' 'MemAvailable: 48558772 kB' 'Buffers: 11760 kB' 'Cached: 10710604 kB' 'SwapCached: 0 kB' 'Active: 7782208 kB' 'Inactive: 3431212 kB' 'Active(anon): 7218184 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3431212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494220 kB' 'Mapped: 154064 kB' 'Shmem: 6727128 kB' 'KReclaimable: 205060 kB' 'Slab: 573204 kB' 'SReclaimable: 205060 kB' 'SUnreclaim: 368144 kB' 'KernelStack: 16416 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8474764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199784 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue (read/continue repeated for each field, MemTotal through Committed_AS, while scanning for HugePages_Total)
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.828 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 
02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.829 02:30:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 26698484 kB' 'MemUsed: 5888580 kB' 'SwapCached: 0 kB' 'Active: 3417276 kB' 'Inactive: 103956 kB' 'Active(anon): 2989316 kB' 'Inactive(anon): 0 kB' 'Active(file): 427960 kB' 'Inactive(file): 103956 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3363152 kB' 'Mapped: 87192 kB' 'AnonPages: 161256 kB' 'Shmem: 2831236 kB' 'KernelStack: 9128 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117376 kB' 'Slab: 321948 kB' 'SReclaimable: 117376 kB' 'SUnreclaim: 204572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
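The field-by-field matching traced above (and below) is the get_meminfo helper from setup/common.sh walking a meminfo file until it reaches the requested key. The following is a minimal sketch of what that helper appears to do, reconstructed only from the xtrace visible in this log; the function and path names mirror the trace, but the real setup/common.sh may differ in details.

#!/usr/bin/env bash
# Hedged reconstruction of the get_meminfo helper being traced in this log.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

get_meminfo() {
    local get=$1 node=$2
    local var val mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip fields we were not asked about
        echo "$val"                        # e.g. the "echo 1024" / "echo 0" lines above
        return 0
    done
    return 1
}

# Usage matching the calls visible in the trace:
get_meminfo HugePages_Total      # whole system
get_meminfo HugePages_Surp 0     # NUMA node 0 only

This matches the pattern in the log: pick /proc/meminfo or the node's meminfo, strip the "Node N " prefix, loop with IFS=': ' read -r var val _, continue on every non-matching field, and echo the value for the matching one.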
00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.829 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 
02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708560 kB' 'MemFree: 18259716 kB' 'MemUsed: 9448844 kB' 'SwapCached: 0 kB' 'Active: 4365524 kB' 'Inactive: 3327256 kB' 'Active(anon): 4229460 kB' 'Inactive(anon): 0 kB' 'Active(file): 136064 kB' 'Inactive(file): 3327256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7359264 kB' 'Mapped: 66872 kB' 'AnonPages: 333592 kB' 'Shmem: 3895944 kB' 'KernelStack: 7288 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87684 kB' 'Slab: 251240 kB' 'SReclaimable: 87684 kB' 'SUnreclaim: 163556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:06.830 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.831 
02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.831 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.832 node0=512 expecting 512 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:06.832 node1=512 expecting 512 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.832 00:04:06.832 real 0m3.525s 00:04:06.832 user 0m1.351s 00:04:06.832 sys 0m2.259s 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:06.832 02:30:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:06.832 ************************************ 00:04:06.832 END TEST per_node_1G_alloc 00:04:06.832 ************************************ 00:04:06.832 02:30:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:06.832 02:30:10 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:06.832 02:30:10 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:06.832 02:30:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.091 ************************************ 00:04:07.091 START TEST even_2G_alloc 
00:04:07.091 ************************************ 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.091 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.092 02:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:10.389 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:10.389 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:00:04.2 (8086 2021): 
Already using the vfio-pci driver 00:04:10.389 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.389 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.389 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:10.389 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.389 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44959140 kB' 'MemAvailable: 48561352 kB' 'Buffers: 11760 kB' 'Cached: 10712136 kB' 'SwapCached: 0 kB' 'Active: 7773412 kB' 'Inactive: 3432628 kB' 'Active(anon): 7209388 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3432628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485100 kB' 'Mapped: 151120 kB' 'Shmem: 6727244 kB' 'KReclaimable: 205004 kB' 'Slab: 573020 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 368016 kB' 'KernelStack: 16320 kB' 'PageTables: 7660 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8428728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199508 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.390 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
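The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field, skipping every key that is not AnonHugePages and finally echoing 0 back to hugepages.sh, which stores it as anon=0 before moving on to HugePages_Surp. A minimal standalone sketch of that lookup (an illustration only, not the actual SPDK helper, which additionally handles per-NUMA-node meminfo files) would be:

#!/usr/bin/env bash
# Sketch: scan /proc/meminfo line by line and print the value of the requested
# field, defaulting to 0 when the field is absent. Not the real setup/common.sh
# implementation; per-node lookups are omitted here.
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done </proc/meminfo
	echo 0
}

get_meminfo_sketch AnonHugePages    # prints 0 when no anonymous THPs are in use
get_meminfo_sketch HugePages_Total  # e.g. 1024 with the even_2G_alloc setup traced above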
setup/common.sh@18 -- # local node= 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44961320 kB' 'MemAvailable: 48563532 kB' 'Buffers: 11760 kB' 'Cached: 10712140 kB' 'SwapCached: 0 kB' 'Active: 7773204 kB' 'Inactive: 3432628 kB' 'Active(anon): 7209180 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3432628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485380 kB' 'Mapped: 151024 kB' 'Shmem: 6727248 kB' 'KReclaimable: 205004 kB' 'Slab: 573004 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 368000 kB' 'KernelStack: 16304 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8427388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199492 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.391 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.392 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
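By this point the trace has resolved anon=0 and surp=0 and is re-reading /proc/meminfo a third time for HugePages_Rsvd. A short illustrative snippet (again not the test script itself; the per-node sysfs path is the standard Linux hugetlb layout and is assumed here) that gathers the same counters plus the per-node 2048 kB hugepage counts the even_2G_alloc case distributes would be:

#!/usr/bin/env bash
# Sketch: collect the counters verify_nr_hugepages is shown reading above,
# then list each NUMA node's 2 MB hugepage count from sysfs.
meminfo_val() { awk -v k="$1" -F': +' '$1 == k {print $2+0; exit}' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
echo "anon=${anon} surp=${surp} resv=${resv}"

# Per-NUMA-node 2048 kB hugepage counts, as exposed by the kernel's sysfs tree.
for n in /sys/devices/system/node/node[0-9]*; do
	echo "$(basename "$n"): $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages)"
done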
mem=("${mem[@]#Node +([0-9]) }") 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44959312 kB' 'MemAvailable: 48561524 kB' 'Buffers: 11760 kB' 'Cached: 10712160 kB' 'SwapCached: 0 kB' 'Active: 7772692 kB' 'Inactive: 3432628 kB' 'Active(anon): 7208668 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3432628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484796 kB' 'Mapped: 151024 kB' 'Shmem: 6727268 kB' 'KReclaimable: 205004 kB' 'Slab: 573004 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 368000 kB' 'KernelStack: 16320 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8427412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199508 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 
02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.393 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.394 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.395 nr_hugepages=1024 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.395 resv_hugepages=0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.395 surplus_hugepages=0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.395 anon_hugepages=0 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 44959060 kB' 'MemAvailable: 48561272 kB' 
'Buffers: 11760 kB' 'Cached: 10712180 kB' 'SwapCached: 0 kB' 'Active: 7772640 kB' 'Inactive: 3432628 kB' 'Active(anon): 7208616 kB' 'Inactive(anon): 0 kB' 'Active(file): 564024 kB' 'Inactive(file): 3432628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484688 kB' 'Mapped: 151024 kB' 'Shmem: 6727288 kB' 'KReclaimable: 205004 kB' 'Slab: 573004 kB' 'SReclaimable: 205004 kB' 'SUnreclaim: 368000 kB' 'KernelStack: 16304 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8426992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199492 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.395 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.396 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.397 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 26698832 kB' 'MemUsed: 5888232 kB' 'SwapCached: 0 kB' 'Active: 3410728 kB' 'Inactive: 105372 kB' 'Active(anon): 2982768 kB' 'Inactive(anon): 0 kB' 'Active(file): 427960 kB' 'Inactive(file): 105372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3364660 kB' 'Mapped: 85080 kB' 'AnonPages: 154708 kB' 'Shmem: 2831328 kB' 'KernelStack: 9048 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117280 kB' 'Slab: 321856 kB' 'SReclaimable: 117280 kB' 'SUnreclaim: 204576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.397 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.398 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708560 kB' 'MemFree: 18258284 kB' 'MemUsed: 9450276 kB' 'SwapCached: 0 kB' 'Active: 4363164 kB' 'Inactive: 3327256 kB' 'Active(anon): 4227100 kB' 'Inactive(anon): 0 kB' 'Active(file): 136064 kB' 'Inactive(file): 3327256 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7359304 kB' 'Mapped: 65944 kB' 'AnonPages: 331308 kB' 'Shmem: 3895984 kB' 'KernelStack: 7224 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87724 kB' 'Slab: 251148 kB' 'SReclaimable: 87724 kB' 'SUnreclaim: 163424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.399 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.400 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 
02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.659 node0=512 expecting 512 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:10.659 node1=512 expecting 512 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.659 00:04:10.659 real 0m3.555s 00:04:10.659 user 0m1.339s 00:04:10.659 sys 0m2.300s 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:10.659 02:30:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.659 ************************************ 00:04:10.659 END TEST even_2G_alloc 00:04:10.659 ************************************ 00:04:10.660 02:30:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:10.660 02:30:13 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:10.660 02:30:13 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:10.660 02:30:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.660 ************************************ 00:04:10.660 START TEST odd_alloc 00:04:10.660 ************************************ 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.660 02:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:13.961 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:13.961 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.3 (8086 2021): Already using the 
vfio-pci driver 00:04:13.961 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:13.961 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.961 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45039944 kB' 'MemAvailable: 48643864 kB' 'Buffers: 12800 kB' 'Cached: 10712456 kB' 'SwapCached: 0 kB' 'Active: 7752368 kB' 'Inactive: 3434260 kB' 'Active(anon): 7188264 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464668 kB' 'Mapped: 149556 kB' 'Shmem: 6726892 kB' 'KReclaimable: 204996 kB' 'Slab: 573524 kB' 'SReclaimable: 204996 kB' 'SUnreclaim: 368528 kB' 'KernelStack: 16208 kB' 'PageTables: 7252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486816 kB' 'Committed_AS: 8402768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199548 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 
'DirectMap1G: 58720256 kB' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
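The xtrace above shows the meminfo scan that setup/common.sh performs for get_meminfo AnonHugePages: the file is cached with mapfile -t mem, each line is split on IFS=': ' into a key and a value, every non-matching key falls through to continue, and the value of the requested key is echoed just before the helper returns 0. A minimal sketch of the same scan, using a hypothetical helper name get_meminfo_value rather than the real setup/common.sh function, and a sed strip standing in for the ${mem[@]#Node +([0-9]) } substitution used in the trace:

get_meminfo_value() {
    # Hypothetical helper sketching the scan traced here; not the actual
    # setup/common.sh implementation.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument the trace reads the per-node file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node lines carry a "Node <n> " prefix; strip it so the split below
    # sees the same "Key: value kB" shape as /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested key: keep scanning
        echo "$val"                        # e.g. 0 for AnonHugePages here
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1                               # requested key not present
}

# Example: the counter this test keeps re-checking for every node
get_meminfo_value HugePages_Surp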
00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.962 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45040476 kB' 'MemAvailable: 48644396 kB' 'Buffers: 12800 kB' 'Cached: 10712456 kB' 'SwapCached: 0 kB' 'Active: 7752368 kB' 'Inactive: 3434260 kB' 'Active(anon): 7188264 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464552 kB' 'Mapped: 149464 kB' 'Shmem: 6726892 kB' 'KReclaimable: 204996 kB' 'Slab: 573468 kB' 'SReclaimable: 204996 kB' 'SUnreclaim: 368472 kB' 'KernelStack: 16208 kB' 'PageTables: 7224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486816 kB' 'Committed_AS: 8402784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.963 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.964 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.965 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45040764 kB' 'MemAvailable: 48644684 kB' 'Buffers: 12800 kB' 'Cached: 10712456 kB' 'SwapCached: 0 kB' 'Active: 7752032 kB' 'Inactive: 3434260 kB' 'Active(anon): 7187928 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464216 kB' 'Mapped: 149464 kB' 'Shmem: 6726892 kB' 'KReclaimable: 204996 kB' 'Slab: 573468 kB' 'SReclaimable: 204996 kB' 'SUnreclaim: 368472 kB' 'KernelStack: 16192 kB' 'PageTables: 7176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486816 kB' 'Committed_AS: 8402804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
[remaining get_meminfo xtrace condensed: the snapshot above is loaded with mapfile, any 'Node <n>' prefix is stripped, and each 'key: value' pair is tested at setup/common.sh@31-@32 against the \H\u\g\e\P\a\g\e\s\_\R\s\v\d pattern, each non-matching field skipped with 'continue', until the requested field is reached]
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:13.967 nr_hugepages=1025
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.967 resv_hugepages=0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.967 surplus_hugepages=0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.967 anon_hugepages=0
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
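The trace above is the setup/common.sh get_meminfo helper reading a single field out of a meminfo snapshot: it picks /proc/meminfo (or a per-node file when a node number is given), strips any "Node <n>" prefix, then walks the snapshot key by key until the requested field matches and echoes its value. A minimal stand-alone sketch of that flow follows; the helper name get_meminfo_sketch, the sed-based prefix stripping and the zero fallback are illustrative assumptions rather than the project's exact implementation, while the field names and file paths come straight from the trace.

    # Simplified re-creation of the lookup traced above (assumptions noted in the lead-in).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node view instead of the global one.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix every line with "Node <n> "; drop that so both
        # formats parse the same way, then split each line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0 for HugePages_Rsvd, 1025 for HugePages_Total
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0                # field not present: report it as zero
    }

    # Usage matching the calls in this trace:
    #   resv=$(get_meminfo_sketch HugePages_Rsvd)
    #   node0_surp=$(get_meminfo_sketch HugePages_Surp 0)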
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.967 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45040764 kB' 'MemAvailable: 48644684 kB' 'Buffers: 12800 kB' 'Cached: 10712496 kB' 'SwapCached: 0 kB' 'Active: 7752316 kB' 'Inactive: 3434260 kB' 'Active(anon): 7188212 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464472 kB' 'Mapped: 149464 kB' 'Shmem: 6726932 kB' 'KReclaimable: 204996 kB' 'Slab: 573468 kB' 'SReclaimable: 204996 kB' 'SUnreclaim: 368472 kB' 'KernelStack: 16192 kB' 'PageTables: 7176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486816 kB' 'Committed_AS: 8402824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB'
[remaining get_meminfo xtrace condensed: the snapshot above is scanned key by key at setup/common.sh@31-@32 against the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l pattern, each non-matching field skipped with 'continue', until HugePages_Total is reached]
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
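With surp, resv and the system-wide HugePages_Total in hand, the arithmetic checks at setup/hugepages.sh@107, @109 and @110 assert that the 1025 pages reported by the kernel equal the requested count plus surplus and reserved pages. A hedged sketch of that accounting check, reusing the hypothetical get_meminfo_sketch from above:

    # Illustrative version of the verification traced at hugepages.sh@107/@109/@110.
    # nr_hugepages=1025 is the odd page count this test configured beforehand.
    nr_hugepages=1025
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
        exit 1
    fi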
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:14.231 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 26756944 kB' 'MemUsed: 5830120 kB' 'SwapCached: 0 kB' 'Active: 3409872 kB' 'Inactive: 106992 kB' 'Active(anon): 2981832 kB' 'Inactive(anon): 0 kB' 'Active(file): 428040 kB' 'Inactive(file): 106992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3366508 kB' 'Mapped: 83868 kB' 'AnonPages: 153580 kB' 'Shmem: 2831476 kB' 'KernelStack: 9032 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117272 kB' 'Slab: 322112 kB' 'SReclaimable: 117272 kB' 'SUnreclaim: 204840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[remaining get_meminfo xtrace condensed: the node0 snapshot above is scanned key by key at setup/common.sh@31-@32 against the \H\u\g\e\P\a\g\e\s\_\S\u\r\p pattern, each non-matching field skipped with 'continue', until HugePages_Surp is reached]
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
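get_nodes found two NUMA nodes, and an odd allocation of 1025 pages is expected to split as 512 pages on node0 and 513 on node1, which is exactly what the node0 snapshot above and the node1 snapshot below report. A minimal sketch of that per-node check; the loop and the comparison against an expected array are illustrative assumptions, while the sysfs paths and the 512/513 split come from the trace:

    # Hypothetical per-node verification for the 1025-page odd_alloc case.
    expected=(512 513)   # node0 keeps 512 pages, node1 gets the extra one
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(get_meminfo_sketch HugePages_Total "$node")
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        # Subtract surplus pages so only the persistent pool is compared.
        if (( total - surp == expected[node] )); then
            echo "node$node: $total hugepages (surplus $surp), as expected"
        else
            echo "node$node: got $total hugepages, expected ${expected[node]}" >&2
        fi
    done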
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:14.232 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708560 kB' 'MemFree: 18284052 kB' 'MemUsed: 9424508 kB' 'SwapCached: 0 kB' 'Active: 4342520 kB' 'Inactive: 3327268 kB' 'Active(anon): 4206456 kB' 'Inactive(anon): 0 kB' 'Active(file): 136064 kB' 'Inactive(file): 3327268 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7358808 kB' 'Mapped: 65596 kB' 'AnonPages: 310980 kB' 'Shmem: 3895476 kB' 'KernelStack: 7176 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87724 kB' 'Slab: 251356 kB' 'SReclaimable: 87724 kB' 'SUnreclaim: 163632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[the xtrace scan of the node1 snapshot against the \H\u\g\e\P\a\g\e\s\_\S\u\r\p pattern starts here at setup/common.sh@31-@32, one 'continue' per non-matching field, and carries on below]
00:04:14.233 02:30:17
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.233 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:14.234 node0=512 expecting 513 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:14.234 node1=513 expecting 512 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:14.234 00:04:14.234 real 0m3.549s 00:04:14.234 user 0m1.330s 00:04:14.234 sys 0m2.302s 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:14.234 02:30:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.234 ************************************ 00:04:14.234 END TEST odd_alloc 00:04:14.234 ************************************ 00:04:14.234 02:30:17 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:14.234 02:30:17 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:14.234 02:30:17 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:14.234 02:30:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.234 ************************************ 00:04:14.234 START TEST custom_alloc 00:04:14.234 ************************************ 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 
)) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.234 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.235 02:30:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:17.532 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:17.532 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.532 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # 
verify_nr_hugepages 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 43979244 kB' 'MemAvailable: 47583160 kB' 'Buffers: 12800 kB' 'Cached: 10712600 kB' 'SwapCached: 0 kB' 'Active: 7756172 kB' 'Inactive: 3434260 kB' 'Active(anon): 7192068 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468044 kB' 'Mapped: 149580 kB' 'Shmem: 6727036 kB' 'KReclaimable: 204988 kB' 'Slab: 573592 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368604 kB' 'KernelStack: 16240 kB' 'PageTables: 7296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963552 kB' 'Committed_AS: 8403172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199516 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.532 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
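Editor's note: just before this scan the trace evaluated [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. it checked whether transparent huge pages are set to anything other than "never" before bothering to read AnonHugePages. A small sketch of that probe against the standard sysfs knob is below; the helper name is illustrative, not the SPDK code.

    thp_probe_sketch() {
        # Reports whether transparent huge pages can contribute to AnonHugePages.
        local thp_file=/sys/kernel/mm/transparent_hugepage/enabled
        local setting

        [[ -r $thp_file ]] || { echo "no THP support"; return 1; }
        setting=$(<"$thp_file")               # e.g. "always [madvise] never"

        if [[ $setting == *"[never]"* ]]; then
            echo "THP disabled; AnonHugePages should not grow"
        else
            echo "THP mode: $setting"
            grep AnonHugePages /proc/meminfo  # worth sampling, as the test does
        fi
    }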
00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.533 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60295624 kB' 'MemFree: 43978572 kB' 'MemAvailable: 47582488 kB' 'Buffers: 12800 kB' 'Cached: 10712604 kB' 'SwapCached: 0 kB' 'Active: 7755140 kB' 'Inactive: 3434260 kB' 'Active(anon): 7191036 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467408 kB' 'Mapped: 149476 kB' 'Shmem: 6727040 kB' 'KReclaimable: 204988 kB' 'Slab: 573568 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368580 kB' 'KernelStack: 16208 kB' 'PageTables: 7180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963552 kB' 'Committed_AS: 8403188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199484 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.534 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.535 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
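The long run of "-- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "-- # continue" entries around this point is just the bash xtrace of get_meminfo walking every field of /proc/meminfo until it reaches the requested key; the backslash-escaped string is only how xtrace quotes the literal pattern HugePages_Surp. Stripped of the tracing, the lookup reduces to roughly the sketch below. This is an illustration, not the exact helper from setup/common.sh: the name get_meminfo_sketch, the here-string loop, and the echo 0 fallback for a missing key are assumptions made for readability.

    #!/usr/bin/env bash
    shopt -s extglob                      # required for the +([0-9]) pattern below

    # Illustrative re-creation of the lookup pattern seen in the trace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}          # key to read, optional NUMA node
        local mem_f=/proc/meminfo line var val _
        # With a node argument the per-node meminfo file is read instead,
        # as the node0 lookup later in this log does.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0                            # assumed fallback; the traced calls always find their key
    }

Called as get_meminfo_sketch HugePages_Surp it prints 0 on this machine, which is the echo 0 / return 0 pair that ends the scan a little further down.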
00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 43979068 kB' 'MemAvailable: 47582984 kB' 'Buffers: 12800 kB' 'Cached: 10712620 kB' 'SwapCached: 0 kB' 'Active: 7755048 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190944 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467340 kB' 'Mapped: 149476 kB' 'Shmem: 6727056 kB' 'KReclaimable: 204988 kB' 'Slab: 573568 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368580 kB' 'KernelStack: 16224 kB' 'PageTables: 7228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963552 kB' 'Committed_AS: 8404088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199484 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.536 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.537 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:17.538 nr_hugepages=1536 
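The bare nr_hugepages=1536 line just above (and the resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 lines that follow) is the echoed summary itself rather than a traced command: hugepages.sh has finished the two lookups, stored surp=0 and resv=0, and now prints the totals it is about to verify. A minimal sketch of that wiring, assuming the values flow through command substitution; the trace only shows the resulting assignments at hugepages.sh@99 and @100, so the $(...) form is an inference:

    # Hypothetical wiring, inferred from the "surp=0" / "resv=0" assignments in the trace.
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    echo "nr_hugepages=1536"                     # these echoes are the unprefixed summary
    echo "resv_hugepages=$resv"                  # lines interleaved with the trace
    echo "surplus_hugepages=$surp"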
00:04:17.538 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.539 resv_hugepages=0 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.539 surplus_hugepages=0 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.539 anon_hugepages=0 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 43979180 kB' 'MemAvailable: 47583096 kB' 'Buffers: 12800 kB' 'Cached: 10712644 kB' 'SwapCached: 0 kB' 'Active: 7754972 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190868 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467276 kB' 'Mapped: 149536 kB' 'Shmem: 6727080 kB' 'KReclaimable: 204988 kB' 'Slab: 573568 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368580 kB' 'KernelStack: 16224 kB' 'PageTables: 7224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963552 kB' 'Committed_AS: 8403232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199484 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
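With surp, resv and the reported nr_hugepages in hand, the (( 1536 == nr_hugepages + surp + resv )) guards at hugepages.sh@107 and @110 are plain arithmetic: the huge page total has to equal the requested pages plus surplus plus reserved, and the HugePages_Total lookup the trace is entering here returns 1536 further down, so the check passes. A worked sketch with this run's numbers, reusing the get_meminfo_sketch helper from above (the total=$(...) wiring is an assumption; the arithmetic itself is taken from the trace):

    surp=0; resv=0                                 # results of the two lookups above
    nr_hugepages=1536                              # value echoed by the script above
    total=$(get_meminfo_sketch HugePages_Total)    # 1536, as echoed further down in the trace
    (( total == nr_hugepages + surp + resv ))      # 1536 == 1536 + 0 + 0 -> check passes

Further down the same 1536 pages are also accounted per NUMA node (nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2), and the per-node re-check reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.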
00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.539 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.540 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.541 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 26744204 kB' 'MemUsed: 5842860 kB' 'SwapCached: 0 kB' 'Active: 3411296 kB' 'Inactive: 106992 kB' 'Active(anon): 2983256 kB' 'Inactive(anon): 0 kB' 'Active(file): 428040 kB' 'Inactive(file): 106992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3366652 kB' 'Mapped: 83880 kB' 'AnonPages: 154852 kB' 'Shmem: 2831620 kB' 'KernelStack: 9016 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117272 kB' 'Slab: 322296 kB' 'SReclaimable: 117272 kB' 'SUnreclaim: 205024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.541 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.542 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.542 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27708560 kB' 'MemFree: 17245764 kB' 'MemUsed: 10462796 kB' 'SwapCached: 0 kB' 'Active: 4343616 kB' 'Inactive: 3327268 kB' 'Active(anon): 4207552 kB' 'Inactive(anon): 0 kB' 'Active(file): 136064 kB' 'Inactive(file): 3327268 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7358812 kB' 'Mapped: 65612 kB' 'AnonPages: 312288 kB' 'Shmem: 3895480 kB' 'KernelStack: 7128 kB' 'PageTables: 3216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87716 kB' 'Slab: 251272 kB' 'SReclaimable: 87716 kB' 'SUnreclaim: 163556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.543 
02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.543 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:17.544 node0=512 expecting 512 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 
1024' 00:04:17.544 node1=1024 expecting 1024 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:17.544 00:04:17.544 real 0m3.255s 00:04:17.544 user 0m1.216s 00:04:17.544 sys 0m2.103s 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:17.544 02:30:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.544 ************************************ 00:04:17.544 END TEST custom_alloc 00:04:17.544 ************************************ 00:04:17.544 02:30:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:17.544 02:30:20 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:17.544 02:30:20 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:17.544 02:30:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.544 ************************************ 00:04:17.544 START TEST no_shrink_alloc 00:04:17.544 ************************************ 00:04:17.544 02:30:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:04:17.544 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.545 02:30:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:20.836 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:20.836 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.836 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.836 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 
'MemFree: 45045072 kB' 'MemAvailable: 48648988 kB' 'Buffers: 12800 kB' 'Cached: 10712748 kB' 'SwapCached: 0 kB' 'Active: 7754856 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190752 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466380 kB' 'Mapped: 149612 kB' 'Shmem: 6727184 kB' 'KReclaimable: 204988 kB' 'Slab: 573516 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368528 kB' 'KernelStack: 16240 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8403868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199548 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.837 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45045656 kB' 'MemAvailable: 48649572 kB' 'Buffers: 12800 kB' 'Cached: 10712752 kB' 'SwapCached: 0 kB' 'Active: 7754152 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190048 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466108 kB' 'Mapped: 149524 kB' 'Shmem: 6727188 kB' 'KReclaimable: 204988 kB' 'Slab: 573532 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 16224 kB' 'PageTables: 7240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8403888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199532 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.102 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.103 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.103 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45046684 kB' 'MemAvailable: 48650600 kB' 'Buffers: 12800 kB' 'Cached: 10712752 kB' 'SwapCached: 0 kB' 'Active: 7753836 kB' 'Inactive: 3434260 kB' 'Active(anon): 7189732 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465796 kB' 'Mapped: 149524 kB' 'Shmem: 6727188 kB' 'KReclaimable: 204988 kB' 'Slab: 573532 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 16224 kB' 'PageTables: 7240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8403908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199532 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.104 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.105 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.106 nr_hugepages=1024 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.106 resv_hugepages=0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.106 surplus_hugepages=0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.106 anon_hugepages=0 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45046940 kB' 'MemAvailable: 48650856 kB' 'Buffers: 12800 kB' 'Cached: 10712752 kB' 'SwapCached: 0 kB' 'Active: 7754376 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190272 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466340 kB' 'Mapped: 149524 kB' 'Shmem: 6727188 kB' 'KReclaimable: 204988 kB' 'Slab: 573532 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368544 kB' 'KernelStack: 16240 kB' 'PageTables: 7288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8403932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199532 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.106 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.107 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.108 02:30:24 
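The long run of "-- # continue" entries above is get_meminfo() in setup/common.sh scanning every key of the meminfo file until it reaches the requested one; the match on HugePages_Total echoes 1024, hugepages.sh@110 then checks that 1024 equals nr_hugepages + surp + resv (which evidently passes here), and get_nodes() records one slot per /sys/devices/system/node/nodeN directory (1024 pages on node0, 0 on node1, no_nodes=2). A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from the SPDK scripts (the function name get_meminfo_sketch and the nodes_sys usage below are illustrative):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) patterns used below

    # Sketch only: mirrors the IFS=': ' / read -r var val _ loop in the trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node files live under /sys and carry a "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any
        local var val _
        while IFS=': ' read -r var val _; do
            # non-matching keys are skipped, which is what produces the
            # long "continue" runs in the trace above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Node enumeration in the spirit of get_nodes(): one slot per NUMA node.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
    done
    echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"   # e.g. "nodes: 0 1 -> 1024 0"
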
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 25700248 kB' 'MemUsed: 6886816 kB' 'SwapCached: 0 kB' 'Active: 3410856 kB' 'Inactive: 106992 kB' 'Active(anon): 2982816 kB' 'Inactive(anon): 0 kB' 'Active(file): 428040 kB' 'Inactive(file): 106992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3366816 kB' 'Mapped: 83928 kB' 'AnonPages: 154204 kB' 'Shmem: 2831784 kB' 'KernelStack: 9048 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117272 kB' 'Slab: 322456 kB' 'SReclaimable: 117272 kB' 'SUnreclaim: 205184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.108 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.109 
02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.109 node0=1024 expecting 1024 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.109 02:30:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:24.451 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:04:24.451 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.451 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.451 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- 
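Above, the same lookup runs against /sys/devices/system/node/node0/meminfo (HugePages_Surp comes back 0), the script prints "node0=1024 expecting 1024", and the [[ 1024 == 1024 ]] check succeeds; hugepages.sh@202 then exports CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh, which leaves the larger allocation in place and simply reports "Requested 512 hugepages but 1024 already allocated on node0". A rough way to reproduce that per-node check by hand, as a sketch only: the awk call stands in for the script's own get_meminfo loop, the relative path and sudo usage are assumptions, while NRHUGE and CLEAR_HUGE are the knobs actually exported in the trace.

    #!/usr/bin/env bash
    # Hand-rolled version of the node0 verification traced above (illustrative;
    # field positions follow the "Node 0 <Key>: <value>" per-node meminfo format).
    node=0
    expected=1024
    actual=$(awk -v key="HugePages_Total:" '$3 == key {print $4}' \
                 "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${actual} expecting ${expected}"
    [[ $actual == "$expected" ]] || { echo "hugepage count mismatch" >&2; exit 1; }

    # Ask setup.sh for fewer pages than are already allocated; the pool is only
    # grown, never shrunk, so the count is expected to stay at 1024 afterwards.
    # (Run from the spdk checkout; typically needs root.)
    CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh
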
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45040968 kB' 'MemAvailable: 48644884 kB' 'Buffers: 12800 kB' 'Cached: 10712880 kB' 'SwapCached: 0 kB' 'Active: 7755336 kB' 'Inactive: 3434260 kB' 'Active(anon): 7191232 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466680 kB' 'Mapped: 149576 kB' 'Shmem: 6727316 kB' 'KReclaimable: 204988 kB' 'Slab: 573300 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368312 kB' 'KernelStack: 16240 kB' 'PageTables: 7276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8404076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199612 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.451 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 
02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
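The verify_nr_hugepages pass that starts after setup.sh returns first checks /sys/kernel/mm/transparent_hugepage/enabled (the trace shows "always [madvise] never", so THP is not disabled) and then scans /proc/meminfo for AnonHugePages, which completes just below with anon=0; the same scan is then repeated for HugePages_Surp. For reference, the system-wide snapshot printed above is internally consistent: HugePages_Total 1024 at a Hugepagesize of 2048 kB accounts for the Hugetlb figure of 2097152 kB (2 GiB). A small stand-in for the THP/AnonHugePages step, assuming awk in place of the script's own read loop and illustrative variable names:

    #!/usr/bin/env bash
    # If transparent hugepages are not set to "never", anonymous THP usage is
    # read from /proc/meminfo; in this run it is 0 kB, hence anon=0 below.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # value in kB
    else
        anon=0
    fi
    echo "anon=$anon"

    # Sanity check on the snapshot above: total hugetlb memory is simply
    # HugePages_Total * Hugepagesize.
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    size_kb=$(awk '$1 == "Hugepagesize:"   {print $2}' /proc/meminfo)
    echo "hugetlb: $(( total * size_kb )) kB"   # 1024 * 2048 = 2097152 kB here
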
00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45044404 kB' 'MemAvailable: 48648320 kB' 'Buffers: 12800 kB' 'Cached: 
10712880 kB' 'SwapCached: 0 kB' 'Active: 7754572 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190468 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466396 kB' 'Mapped: 149476 kB' 'Shmem: 6727316 kB' 'KReclaimable: 204988 kB' 'Slab: 573260 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368272 kB' 'KernelStack: 16224 kB' 'PageTables: 7228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8403724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199596 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.452 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
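The long run of "continue" statements above is setup/common.sh's get_meminfo helper scanning a mapfile'd copy of /proc/meminfo key by key until it reaches HugePages_Surp; every field that is not the requested key costs one [[ ... ]] test plus one continue in the xtrace. A minimal sketch of that lookup, reconstructed from the trace rather than taken from the script's actual source (the name get_meminfo_sketch is made up here):

    # Reconstruction from the xtrace, not the real setup/common.sh.
    # Prints the value of KEY from /proc/meminfo, or from the per-node meminfo
    # when a NODE argument is given; falls back to 0 if the key is missing.
    shopt -s extglob                      # needed for the "Node N " prefix strip
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue"s in the log
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

Because /proc/meminfo carries 50-odd fields, a single lookup of one HugePages_* key produces the dozens of IFS/read/continue lines seen in each pass of this trace.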
00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.453 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45044880 kB' 'MemAvailable: 48648796 kB' 'Buffers: 12800 kB' 'Cached: 10712900 kB' 'SwapCached: 0 kB' 'Active: 7755020 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190916 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466804 kB' 'Mapped: 149492 kB' 'Shmem: 6727336 kB' 'KReclaimable: 204988 kB' 'Slab: 573244 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368256 kB' 'KernelStack: 16192 kB' 'PageTables: 7156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8405408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199564 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.454 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 
02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.455 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.456 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.457 nr_hugepages=1024 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.457 resv_hugepages=0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.457 surplus_hugepages=0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.457 anon_hugepages=0 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295624 kB' 'MemFree: 45043228 kB' 'MemAvailable: 48647144 kB' 'Buffers: 12800 kB' 'Cached: 10712920 kB' 'SwapCached: 0 kB' 'Active: 7754996 kB' 'Inactive: 3434260 kB' 'Active(anon): 7190892 kB' 'Inactive(anon): 0 kB' 'Active(file): 564104 kB' 'Inactive(file): 3434260 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466756 kB' 'Mapped: 149492 kB' 'Shmem: 6727356 kB' 'KReclaimable: 204988 kB' 'Slab: 573244 kB' 'SReclaimable: 204988 kB' 'SUnreclaim: 368256 kB' 'KernelStack: 16192 kB' 'PageTables: 7332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487840 kB' 'Committed_AS: 8406744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199612 kB' 'VmallocChunk: 0 kB' 'Percpu: 51200 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 533976 kB' 'DirectMap2M: 9627648 kB' 'DirectMap1G: 58720256 kB' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.457 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
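With the Surp and Rsvd lookups both returning 0 and the HugePages_Total read-back under way here, hugepages.sh@99-@110 reduces to a small piece of accounting: echo the derived counters (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and verify that the kernel-reported pool matches what the no_shrink_alloc test asked for. A hedged sketch of that check, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch:

    nr_hugepages=1024                             # pool size requested by this test run
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in the trace (hugepages.sh@99)
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the trace (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=0"
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the trace (hugepages.sh@110)
    # The pool is only considered healthy when the kernel-reported total equals the
    # requested count plus surplus and reserved pages, which here are both zero.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2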
00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.458 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 
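The loop traced above (and continuing below) is the common get_meminfo helper walking a meminfo-style file one `key: value` line at a time with `IFS=': ' read -r var val _`, issuing `continue` on every field until the name matches the one requested, then echoing its value. A minimal standalone sketch of the same pattern follows; the function name meminfo_value is ours for illustration, not a helper from the scripts traced here.

#!/usr/bin/env bash
# Print the value of one field from a meminfo-style file.
# Usage: meminfo_value HugePages_Total [/proc/meminfo]
meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do          # trailing "kB" lands in the third field
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
meminfo_value HugePages_Total          # prints 1024 on the box traced above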
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32587064 kB' 'MemFree: 25707852 kB' 'MemUsed: 6879212 kB' 'SwapCached: 0 kB' 'Active: 3410992 kB' 'Inactive: 106992 kB' 'Active(anon): 2982952 kB' 'Inactive(anon): 0 kB' 'Active(file): 428040 kB' 'Inactive(file): 106992 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3366924 kB' 'Mapped: 83896 kB' 'AnonPages: 154260 kB' 'Shmem: 2831892 kB' 'KernelStack: 8920 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117272 kB' 'Slab: 322196 kB' 'SReclaimable: 
117272 kB' 'SUnreclaim: 204924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 
02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.459 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 
02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.460 node0=1024 expecting 1024 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.460 00:04:24.460 real 0m6.920s 00:04:24.460 user 0m2.631s 00:04:24.460 sys 0m4.461s 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:24.460 02:30:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.460 ************************************ 00:04:24.460 END TEST no_shrink_alloc 00:04:24.460 ************************************ 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.460 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.720 02:30:27 setup.sh.hugepages -- 
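The check that just completed reads each NUMA node's counters from /sys/devices/system/node/nodeN/meminfo (after stripping the leading `Node N ` prefix) and ends with `node0=1024 expecting 1024`, i.e. the global HugePages_Total matches what the nodes report. A hedged sketch of the same cross-check using only sysfs and /proc, independent of the test's own helpers:

# Sum per-node HugePages_Total and compare with the system-wide figure.
total=0
for node in /sys/devices/system/node/node[0-9]*; do
    n=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    echo "${node##*/}: $n huge pages"
    total=$(( total + n ))
done
global=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
(( total == global )) && echo "per-node counts add up to $global"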
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.720 02:30:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.720 00:04:24.720 real 0m25.273s 00:04:24.720 user 0m9.501s 00:04:24.720 sys 0m16.354s 00:04:24.720 02:30:27 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:24.720 02:30:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.720 ************************************ 00:04:24.720 END TEST hugepages 00:04:24.720 ************************************ 00:04:24.720 02:30:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:24.720 02:30:27 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:24.720 02:30:27 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:24.720 02:30:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.720 ************************************ 00:04:24.720 START TEST driver 00:04:24.720 ************************************ 00:04:24.720 02:30:27 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:24.720 * Looking for test storage... 00:04:24.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:24.720 02:30:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.720 02:30:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.720 02:30:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.996 02:30:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:29.996 02:30:32 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:29.996 02:30:32 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:29.996 02:30:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.996 ************************************ 00:04:29.996 START TEST guess_driver 00:04:29.996 ************************************ 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:29.996 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- 
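clear_hp above loops over every hugepages-<size> directory of every node and echoes 0 before exporting CLEAR_HUGE=yes. The trace only shows the bare `echo 0`, so the redirect target in the sketch below (each directory's nr_hugepages file) is an assumption consistent with that loop, not a quote from the script.

# Release all per-node hugepage reservations (requires root).
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"        # assumed target of the 'echo 0' traced above
    done
done
export CLEAR_HUGE=yes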
setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 163 > 0 )) 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:29.997 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:29.997 Looking for driver=vfio-pci 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.997 02:30:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 
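guess_driver settles on vfio-pci here because the host exposes IOMMU groups (163 entries under /sys/kernel/iommu_groups), unsafe no-IOMMU mode is off (N), and `modprobe --show-depends vfio_pci` resolves to real .ko.xz modules. A hedged sketch of that decision; the fallback message is illustrative, not the script's wording.

# Minimal driver pick: vfio-pci when the IOMMU is active and the module resolves.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
        echo vfio-pci
    else
        echo "no suitable driver"          # placeholder fallback
    fi
}
echo "Looking for driver=$(pick_driver)"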
02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.288 02:30:36 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.563 00:04:38.563 real 0m8.332s 00:04:38.563 user 0m2.621s 00:04:38.563 sys 0m4.942s 00:04:38.563 02:30:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.563 02:30:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.563 ************************************ 00:04:38.563 END TEST guess_driver 00:04:38.563 ************************************ 00:04:38.563 00:04:38.563 real 0m13.204s 00:04:38.563 user 0m4.020s 00:04:38.563 sys 0m7.616s 00:04:38.563 02:30:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:38.563 02:30:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.563 ************************************ 00:04:38.563 END TEST driver 00:04:38.563 ************************************ 00:04:38.563 02:30:41 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:38.563 02:30:41 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:38.563 02:30:41 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:38.563 02:30:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.563 ************************************ 00:04:38.563 START TEST devices 00:04:38.563 ************************************ 00:04:38.563 02:30:41 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:38.563 * Looking for test storage... 
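The long run of `read -r _ _ _ _ marker setup_driver` / `[[ -> == -> ]]` / `[[ vfio-pci == vfio-pci ]]` checks above is the test consuming each status line of `setup.sh config`, taking the fifth and sixth whitespace-separated fields as the arrow marker and the bound driver, and counting a failure if any device reports something other than vfio-pci. A small sketch of that field-splitting pattern over a made-up status line; the line format here is illustrative only.

fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' && $setup_driver == vfio-pci ]] || fail=1
done <<'EOF'
0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
EOF
(( fail == 0 )) && echo "all devices bound to vfio-pci"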
00:04:38.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:38.563 02:30:41 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.563 02:30:41 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.563 02:30:41 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.563 02:30:41 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.854 02:30:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.854 02:30:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:41.855 02:30:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.855 No valid GPT data, bailing 00:04:41.855 02:30:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:41.855 02:30:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.855 02:30:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 
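The device discovery above skips zoned namespaces (queue/zoned other than "none"), treats "No valid GPT data, bailing" from spdk-gpt.py/blkid as meaning the disk is free to use, converts the sector count to bytes (1920383410176 here), and keeps only disks of at least min_disk_size=3221225472 bytes (3 GiB). A hedged sketch of the same size filter using plain sysfs:

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3 GiB, matching the test's threshold
for dev in /sys/block/nvme*; do
    name=${dev##*/}
    [[ $name == *c* ]] && continue                                 # hidden controller nodes
    [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue   # skip zoned
    bytes=$(( $(<"$dev/size") * 512 ))                             # size is in 512-byte sectors
    (( bytes >= min_disk_size )) && echo "$name: $bytes bytes, large enough"
done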
00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.855 02:30:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.855 02:30:44 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:41.855 02:30:44 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:41.855 02:30:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.855 ************************************ 00:04:41.855 START TEST nvme_mount 00:04:41.855 ************************************ 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.855 02:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:42.793 Creating new GPT entries in memory. 00:04:42.793 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.793 other utilities. 00:04:42.793 02:30:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.793 02:30:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.793 02:30:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:42.793 02:30:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.793 02:30:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.731 Creating new GPT entries in memory. 00:04:43.731 The operation has completed successfully. 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 646263 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.731 02:30:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.731 02:30:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: 
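partition_drive divides the requested 1 GiB (1073741824 bytes) by 512 to get 2097152 sectors, starts the first partition at sector 2048, and ends it at 2048 + 2097152 - 1 = 2099199, which is exactly the `sgdisk --new=1:2048:2099199` issued above after the `--zap-all`. A condensed, destructive sketch of that calculation and the follow-on format/mount; the device path and mount point are placeholders, not the test's paths.

disk=/dev/nvme0n1          # placeholder - this sequence wipes the disk
size=1073741824            # 1 GiB per partition
(( size /= 512 ))          # bytes -> 512-byte sectors = 2097152
part_start=2048
part_end=$(( part_start + size - 1 ))       # 2099199
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:${part_start}:${part_end}
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" /mnt/test_nvme            # example mount point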
mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.023 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.024 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.024 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.283 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.283 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.283 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.283 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.283 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:47.283 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:47.283 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.542 02:30:50 
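cleanup_nvme above unmounts the test mount, wipes the partition signature and then the whole disk (wipefs prints the GPT header, backup header and PMBR bytes it erased), after which the test reformats the bare device with a 1 GiB ext4 to exercise the unpartitioned path. A hedged sketch of that teardown, again with placeholder paths:

disk=/dev/nvme0n1                     # placeholder test disk
mnt=/mnt/test_nvme                    # example mount point
mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
wipefs --all "$disk"                  # clears primary GPT, backup GPT and PMBR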
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.542 02:30:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:50.833 02:30:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.833 02:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.124 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.124 00:04:54.124 real 0m12.346s 00:04:54.124 user 0m3.548s 00:04:54.124 sys 0m6.763s 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.124 02:30:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.124 ************************************ 00:04:54.124 END TEST nvme_mount 00:04:54.124 ************************************ 00:04:54.124 02:30:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:54.124 02:30:57 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.124 02:30:57 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.124 02:30:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.124 ************************************ 00:04:54.124 START TEST dm_mount 00:04:54.124 ************************************ 00:04:54.124 02:30:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:04:54.124 02:30:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.125 02:30:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:55.060 Creating new GPT entries in memory. 00:04:55.060 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.060 other utilities. 00:04:55.060 02:30:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.060 02:30:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.060 02:30:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.060 02:30:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.060 02:30:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.510 Creating new GPT entries in memory. 00:04:56.510 The operation has completed successfully. 00:04:56.510 02:30:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.510 02:30:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.510 02:30:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.510 02:30:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.510 02:30:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:57.446 The operation has completed successfully. 
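[Editor's note] The partitioning step that just completed in the dm_mount trace reduces to the following shell sketch. Commands, the flock wrapper, and the sector ranges are taken from the trace above; it is destructive and assumes /dev/nvme0n1 is a scratch device, as it is on this test node.

  # Recreate the two 1 GiB test partitions the way the trace shows (destructive).
  sgdisk /dev/nvme0n1 --zap-all                                    # wipe existing GPT/MBR metadata
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199      # partition 1: sectors 2048..2099199 (1 GiB at 512 B sectors)
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # partition 2: the next 1 GiB

The flock around each sgdisk call mirrors the trace and serializes partition-table updates while udev events from the previous change are still settling.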
00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 650025 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.446 02:31:00 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.446 02:31:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
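[Editor's note] The PCI scan running above (and in the earlier nvme_mount verify) follows one pattern: setup.sh config is invoked with PCI_ALLOWED restricted to the controller under test, each output line is read field by field, and found is set only when the allowed BDF reports the device/mount combination the test just created. The following is a simplified paraphrase of that loop, not the setup/devices.sh source itself; the allowlist value, the expected mount string, and the script path are taken from the trace.

  # Simplified paraphrase of the verify loop: only 0000:5e:00.0 may be touched, and it must
  # report the nvme0n1:nvme_dm_test mapping created above.
  PCI_ALLOWED=0000:5e:00.0
  expected="nvme0n1:nvme_dm_test"
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$PCI_ALLOWED" && $status == *"Active devices: "*"$expected"* ]] && found=1
  done < <(PCI_ALLOWED=$PCI_ALLOWED /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config)
  (( found == 1 ))   # the test fails at this point if the in-use controller was not skipped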
00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.737 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.738 02:31:03 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.738 02:31:03 
setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.028 02:31:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.028 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.029 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.029 00:05:04.029 real 0m9.822s 00:05:04.029 user 0m2.464s 00:05:04.029 sys 0m4.465s 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.029 02:31:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.029 ************************************ 00:05:04.029 END TEST dm_mount 00:05:04.029 ************************************ 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.029 02:31:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.288 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.288 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.288 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.288 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.288 02:31:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.288 00:05:04.288 real 0m26.347s 00:05:04.288 user 0m7.342s 00:05:04.288 sys 0m13.984s 00:05:04.288 02:31:07 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.288 02:31:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:04.288 ************************************ 00:05:04.288 END TEST devices 00:05:04.288 ************************************ 00:05:04.288 00:05:04.288 real 1m29.499s 00:05:04.288 user 0m28.561s 00:05:04.288 sys 0m52.812s 00:05:04.288 02:31:07 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:04.288 02:31:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.288 ************************************ 00:05:04.288 END TEST setup.sh 00:05:04.288 ************************************ 00:05:04.288 02:31:07 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:07.579 Hugepages 00:05:07.579 node hugesize free / total 00:05:07.579 node0 1048576kB 0 / 0 00:05:07.579 node0 2048kB 2048 / 2048 00:05:07.579 node1 1048576kB 0 / 0 00:05:07.579 node1 2048kB 0 / 0 00:05:07.579 00:05:07.579 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.579 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:07.579 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:07.579 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:07.579 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:07.579 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:07.579 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:07.838 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:07.838 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:07.838 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:07.838 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:07.838 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:07.838 02:31:10 -- spdk/autotest.sh@130 -- # uname -s 00:05:07.838 02:31:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:07.838 02:31:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:07.838 02:31:10 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:11.126 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 
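[Editor's note] The Hugepages rows in the status table above (node0/node1, 1048576 kB and 2048 kB sizes) can be reproduced outside the test from standard sysfs counters. This is a hedged sketch using the generic Linux sysfs layout; it is not something setup.sh itself provides, and the node and size values will differ on other hosts.

  # Print free/total hugepages per NUMA node and page size, matching the node0/node1 rows above.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          printf '%s %s free=%s / total=%s\n' "${node##*/}" "${hp##*/}" \
              "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
      done
  done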
00:05:11.126 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.126 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:13.032 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:05:13.032 02:31:15 -- common/autotest_common.sh@1529 -- # sleep 1 00:05:13.969 02:31:17 -- common/autotest_common.sh@1530 -- # bdfs=() 00:05:13.969 02:31:17 -- common/autotest_common.sh@1530 -- # local bdfs 00:05:13.969 02:31:17 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:05:13.969 02:31:17 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:05:13.969 02:31:17 -- common/autotest_common.sh@1510 -- # bdfs=() 00:05:13.969 02:31:17 -- common/autotest_common.sh@1510 -- # local bdfs 00:05:13.969 02:31:17 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.969 02:31:17 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.969 02:31:17 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:05:13.969 02:31:17 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:05:13.969 02:31:17 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:05:13.969 02:31:17 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.258 Waiting for block devices as requested 00:05:17.258 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:05:17.258 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:17.258 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:17.516 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:17.516 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:17.516 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:17.774 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:17.774 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:17.774 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.031 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:18.031 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:18.031 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:18.290 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:18.290 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:18.290 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:18.549 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.549 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.549 02:31:21 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:05:18.549 02:31:21 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1499 -- # grep 0000:5e:00.0/nvme/nvme 00:05:18.549 02:31:21 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:18.549 02:31:21 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:05:18.549 02:31:21 -- 
common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:05:18.549 02:31:21 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1542 -- # grep oacs 00:05:18.549 02:31:21 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:05:18.549 02:31:21 -- common/autotest_common.sh@1542 -- # oacs=' 0x5f' 00:05:18.549 02:31:21 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:05:18.549 02:31:21 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:05:18.549 02:31:21 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:05:18.549 02:31:21 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:05:18.549 02:31:21 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:05:18.549 02:31:21 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:05:18.549 02:31:21 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:05:18.549 02:31:21 -- common/autotest_common.sh@1554 -- # continue 00:05:18.549 02:31:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:18.808 02:31:21 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:18.808 02:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.808 02:31:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:18.808 02:31:21 -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:18.808 02:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:18.808 02:31:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:22.098 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.098 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.098 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.098 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:05:22.099 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.099 02:31:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:22.099 02:31:25 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:22.099 02:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.358 02:31:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:22.358 02:31:25 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:05:22.358 02:31:25 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.358 02:31:25 -- common/autotest_common.sh@1574 -- # bdfs=() 00:05:22.358 02:31:25 -- common/autotest_common.sh@1574 -- # local bdfs 00:05:22.358 02:31:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:05:22.358 02:31:25 -- common/autotest_common.sh@1510 -- # bdfs=() 00:05:22.358 02:31:25 -- common/autotest_common.sh@1510 -- # local bdfs 00:05:22.358 02:31:25 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.358 
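[Editor's note] The bdfs array being filled in the trace comes from a single pipeline, shown here as a standalone sketch. The script path, the jq filter, and the resulting address 0000:5e:00.0 are all taken from the trace; nothing else is assumed.

  # Enumerate NVMe controllers the same way the trace does: gen_nvme.sh emits a JSON bdev
  # config and jq extracts each controller's PCI address (traddr).
  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # on this host: 0000:5e:00.0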
02:31:25 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.358 02:31:25 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:05:22.358 02:31:25 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:05:22.358 02:31:25 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:05:22.358 02:31:25 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:05:22.358 02:31:25 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:22.358 02:31:25 -- common/autotest_common.sh@1577 -- # device=0xa80a 00:05:22.358 02:31:25 -- common/autotest_common.sh@1578 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:22.358 02:31:25 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:05:22.358 02:31:25 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:05:22.358 02:31:25 -- common/autotest_common.sh@1590 -- # return 0 00:05:22.358 02:31:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:22.358 02:31:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:22.358 02:31:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:22.358 02:31:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:22.358 02:31:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:22.358 02:31:25 -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:22.358 02:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.358 02:31:25 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:22.358 02:31:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.358 02:31:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.358 02:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.359 ************************************ 00:05:22.359 START TEST env 00:05:22.359 ************************************ 00:05:22.359 02:31:25 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:22.649 * Looking for test storage... 
00:05:22.649 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:22.649 02:31:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.649 02:31:25 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.649 02:31:25 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.649 02:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.649 ************************************ 00:05:22.649 START TEST env_memory 00:05:22.649 ************************************ 00:05:22.649 02:31:25 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.649 00:05:22.649 00:05:22.649 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.649 http://cunit.sourceforge.net/ 00:05:22.649 00:05:22.649 00:05:22.649 Suite: memory 00:05:22.649 Test: alloc and free memory map ...[2024-05-15 02:31:25.776971] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.649 passed 00:05:22.649 Test: mem map translation ...[2024-05-15 02:31:25.806362] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.649 [2024-05-15 02:31:25.806389] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.649 [2024-05-15 02:31:25.806449] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.649 [2024-05-15 02:31:25.806463] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.649 passed 00:05:22.649 Test: mem map registration ...[2024-05-15 02:31:25.864318] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:22.649 [2024-05-15 02:31:25.864343] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:22.649 passed 00:05:22.910 Test: mem map adjacent registrations ...passed 00:05:22.910 00:05:22.910 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.910 suites 1 1 n/a 0 0 00:05:22.910 tests 4 4 4 0 0 00:05:22.910 asserts 152 152 152 0 n/a 00:05:22.910 00:05:22.910 Elapsed time = 0.207 seconds 00:05:22.910 00:05:22.910 real 0m0.222s 00:05:22.910 user 0m0.206s 00:05:22.910 sys 0m0.014s 00:05:22.910 02:31:25 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.910 02:31:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.910 ************************************ 00:05:22.910 END TEST env_memory 00:05:22.910 ************************************ 00:05:22.910 02:31:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.910 02:31:25 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.910 02:31:25 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.910 02:31:25 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.910 ************************************ 00:05:22.910 START TEST env_vtophys 00:05:22.910 ************************************ 00:05:22.910 02:31:26 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.910 EAL: lib.eal log level changed from notice to debug 00:05:22.910 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.910 EAL: Detected lcore 1 as core 1 on socket 0 00:05:22.910 EAL: Detected lcore 2 as core 2 on socket 0 00:05:22.910 EAL: Detected lcore 3 as core 3 on socket 0 00:05:22.910 EAL: Detected lcore 4 as core 4 on socket 0 00:05:22.910 EAL: Detected lcore 5 as core 8 on socket 0 00:05:22.910 EAL: Detected lcore 6 as core 9 on socket 0 00:05:22.910 EAL: Detected lcore 7 as core 10 on socket 0 00:05:22.910 EAL: Detected lcore 8 as core 11 on socket 0 00:05:22.910 EAL: Detected lcore 9 as core 16 on socket 0 00:05:22.910 EAL: Detected lcore 10 as core 17 on socket 0 00:05:22.910 EAL: Detected lcore 11 as core 18 on socket 0 00:05:22.910 EAL: Detected lcore 12 as core 19 on socket 0 00:05:22.910 EAL: Detected lcore 13 as core 20 on socket 0 00:05:22.910 EAL: Detected lcore 14 as core 24 on socket 0 00:05:22.910 EAL: Detected lcore 15 as core 25 on socket 0 00:05:22.910 EAL: Detected lcore 16 as core 26 on socket 0 00:05:22.910 EAL: Detected lcore 17 as core 27 on socket 0 00:05:22.910 EAL: Detected lcore 18 as core 0 on socket 1 00:05:22.910 EAL: Detected lcore 19 as core 1 on socket 1 00:05:22.910 EAL: Detected lcore 20 as core 2 on socket 1 00:05:22.910 EAL: Detected lcore 21 as core 3 on socket 1 00:05:22.910 EAL: Detected lcore 22 as core 4 on socket 1 00:05:22.910 EAL: Detected lcore 23 as core 8 on socket 1 00:05:22.910 EAL: Detected lcore 24 as core 9 on socket 1 00:05:22.910 EAL: Detected lcore 25 as core 10 on socket 1 00:05:22.910 EAL: Detected lcore 26 as core 11 on socket 1 00:05:22.910 EAL: Detected lcore 27 as core 16 on socket 1 00:05:22.910 EAL: Detected lcore 28 as core 17 on socket 1 00:05:22.910 EAL: Detected lcore 29 as core 18 on socket 1 00:05:22.910 EAL: Detected lcore 30 as core 19 on socket 1 00:05:22.910 EAL: Detected lcore 31 as core 20 on socket 1 00:05:22.910 EAL: Detected lcore 32 as core 24 on socket 1 00:05:22.910 EAL: Detected lcore 33 as core 25 on socket 1 00:05:22.910 EAL: Detected lcore 34 as core 26 on socket 1 00:05:22.910 EAL: Detected lcore 35 as core 27 on socket 1 00:05:22.910 EAL: Detected lcore 36 as core 0 on socket 0 00:05:22.910 EAL: Detected lcore 37 as core 1 on socket 0 00:05:22.910 EAL: Detected lcore 38 as core 2 on socket 0 00:05:22.910 EAL: Detected lcore 39 as core 3 on socket 0 00:05:22.910 EAL: Detected lcore 40 as core 4 on socket 0 00:05:22.910 EAL: Detected lcore 41 as core 8 on socket 0 00:05:22.910 EAL: Detected lcore 42 as core 9 on socket 0 00:05:22.910 EAL: Detected lcore 43 as core 10 on socket 0 00:05:22.910 EAL: Detected lcore 44 as core 11 on socket 0 00:05:22.910 EAL: Detected lcore 45 as core 16 on socket 0 00:05:22.910 EAL: Detected lcore 46 as core 17 on socket 0 00:05:22.910 EAL: Detected lcore 47 as core 18 on socket 0 00:05:22.910 EAL: Detected lcore 48 as core 19 on socket 0 00:05:22.910 EAL: Detected lcore 49 as core 20 on socket 0 00:05:22.910 EAL: Detected lcore 50 as core 24 on socket 0 00:05:22.910 EAL: Detected lcore 51 as core 25 on socket 0 00:05:22.910 EAL: Detected lcore 52 as core 26 on socket 0 00:05:22.910 EAL: Detected lcore 53 as core 27 on socket 0 
00:05:22.910 EAL: Detected lcore 54 as core 0 on socket 1 00:05:22.910 EAL: Detected lcore 55 as core 1 on socket 1 00:05:22.910 EAL: Detected lcore 56 as core 2 on socket 1 00:05:22.910 EAL: Detected lcore 57 as core 3 on socket 1 00:05:22.910 EAL: Detected lcore 58 as core 4 on socket 1 00:05:22.910 EAL: Detected lcore 59 as core 8 on socket 1 00:05:22.910 EAL: Detected lcore 60 as core 9 on socket 1 00:05:22.910 EAL: Detected lcore 61 as core 10 on socket 1 00:05:22.910 EAL: Detected lcore 62 as core 11 on socket 1 00:05:22.910 EAL: Detected lcore 63 as core 16 on socket 1 00:05:22.910 EAL: Detected lcore 64 as core 17 on socket 1 00:05:22.910 EAL: Detected lcore 65 as core 18 on socket 1 00:05:22.910 EAL: Detected lcore 66 as core 19 on socket 1 00:05:22.910 EAL: Detected lcore 67 as core 20 on socket 1 00:05:22.910 EAL: Detected lcore 68 as core 24 on socket 1 00:05:22.910 EAL: Detected lcore 69 as core 25 on socket 1 00:05:22.910 EAL: Detected lcore 70 as core 26 on socket 1 00:05:22.910 EAL: Detected lcore 71 as core 27 on socket 1 00:05:22.910 EAL: Maximum logical cores by configuration: 128 00:05:22.910 EAL: Detected CPU lcores: 72 00:05:22.910 EAL: Detected NUMA nodes: 2 00:05:22.910 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:22.910 EAL: Detected shared linkage of DPDK 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:22.910 EAL: Registered [vdev] bus. 00:05:22.910 EAL: bus.vdev log level changed from disabled to notice 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:22.910 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:22.910 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:22.910 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:22.911 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:22.911 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Bus pci wants IOVA as 'DC' 00:05:22.911 EAL: Bus vdev wants IOVA as 'DC' 00:05:22.911 EAL: Buses did not request a specific IOVA mode. 00:05:22.911 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:22.911 EAL: Selected IOVA mode 'VA' 00:05:22.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.911 EAL: Probing VFIO support... 00:05:22.911 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.911 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.911 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.911 EAL: VFIO support initialized 00:05:22.911 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.911 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.911 EAL: Setting up physically contiguous memory... 
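[Editor's note] The "IOMMU type 1 (Type 1) is supported" and "VFIO support initialized" messages above can be cross-checked independently of the test binary. This is a hedged sketch using standard Linux sysfs locations, not part of SPDK or DPDK; the BDF is the controller under test from earlier in the log.

  # An active IOMMU driver appears under /sys/class/iommu, and every PCI device that can be
  # handed to vfio-pci belongs to an IOMMU group.
  ls /sys/class/iommu/
  readlink -f /sys/bus/pci/devices/0000:5e:00.0/iommu_group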
00:05:22.911 EAL: Setting maximum number of open files to 524288 00:05:22.911 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.911 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.911 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.911 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.911 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.911 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.911 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.911 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.911 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:22.911 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.911 EAL: Hugepages will be freed exactly as allocated. 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: TSC frequency is ~2300000 KHz 00:05:22.911 EAL: Main lcore 0 is ready (tid=7f0ae978da00;cpuset=[0]) 00:05:22.911 EAL: Trying to obtain current memory policy. 00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 0 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.911 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:22.911 EAL: probe driver: 8086:37d2 net_i40e 00:05:22.911 EAL: Not managed by a supported kernel driver, skipped 00:05:22.911 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:22.911 EAL: probe driver: 8086:37d2 net_i40e 00:05:22.911 EAL: Not managed by a supported kernel driver, skipped 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.911 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.911 00:05:22.911 00:05:22.911 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.911 http://cunit.sourceforge.net/ 00:05:22.911 00:05:22.911 00:05:22.911 Suite: components_suite 00:05:22.911 Test: vtophys_malloc_test ...passed 00:05:22.911 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 4 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.911 EAL: Trying to obtain current memory policy. 00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 4 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.911 EAL: Trying to obtain current memory policy. 00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 4 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.911 EAL: Trying to obtain current memory policy. 
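Note: the EAL output above is produced while SPDK's env_dpdk layer brings DPDK up before the CUnit suite runs. A minimal sketch of that initialization path against the public SPDK env API follows; the process name is illustrative and not taken from the test source.

```c
#include "spdk/env.h"
#include <stdio.h>

int
main(int argc, char **argv)
{
	struct spdk_env_opts opts;

	/* Fill in defaults, then override the fields the test cares about. */
	spdk_env_opts_init(&opts);
	opts.name = "env_ut";                      /* illustrative process name */
	opts.core_mask = "0x1";                    /* single core, as in the log */
	opts.base_virtaddr = 0x200000000000ULL;    /* matches the VA range reserved above */

	/* spdk_env_init() drives rte_eal_init(); the "EAL: ..." lines above
	 * (memseg lists, VA reservations, lcore setup) are printed here. */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "Unable to initialize SPDK env\n");
		return 1;
	}

	/* ... CUnit suite registration and test run would follow ... */
	return 0;
}
```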
00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 4 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.911 EAL: Trying to obtain current memory policy. 00:05:22.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.911 EAL: Restoring previous memory policy: 4 00:05:22.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.911 EAL: request: mp_malloc_sync 00:05:22.911 EAL: No shared files mode enabled, IPC is disabled 00:05:22.911 EAL: Heap on socket 0 was expanded by 34MB 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was shrunk by 34MB 00:05:23.171 EAL: Trying to obtain current memory policy. 00:05:23.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.171 EAL: Restoring previous memory policy: 4 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was expanded by 66MB 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was shrunk by 66MB 00:05:23.171 EAL: Trying to obtain current memory policy. 00:05:23.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.171 EAL: Restoring previous memory policy: 4 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was expanded by 130MB 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was shrunk by 130MB 00:05:23.171 EAL: Trying to obtain current memory policy. 00:05:23.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.171 EAL: Restoring previous memory policy: 4 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.171 EAL: request: mp_malloc_sync 00:05:23.171 EAL: No shared files mode enabled, IPC is disabled 00:05:23.171 EAL: Heap on socket 0 was expanded by 258MB 00:05:23.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.429 EAL: request: mp_malloc_sync 00:05:23.429 EAL: No shared files mode enabled, IPC is disabled 00:05:23.429 EAL: Heap on socket 0 was shrunk by 258MB 00:05:23.429 EAL: Trying to obtain current memory policy. 
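Each expand/shrink pair in this trace is the env heap growing to satisfy a progressively larger allocation and then releasing it. A hedged sketch of the allocation pattern being exercised, using the public SPDK DMA/vtophys API (the size argument and helper name are illustrative):

```c
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

static int
touch_buffer(size_t size)
{
	uint64_t paddr;
	void *buf;

	/* Allocate pinned, DMA-safe memory from the env heap; satisfying this
	 * is what triggers "Heap on socket 0 was expanded by ..." above. */
	buf = spdk_dma_malloc(size, 0x1000 /* 4 KiB alignment */, NULL);
	if (buf == NULL) {
		return -1;
	}

	/* Translate the virtual address to a physical address. */
	paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}
	printf("va %p -> pa 0x%" PRIx64 "\n", buf, paddr);

	/* Freeing lets the heap contract again ("Heap ... was shrunk by ..."). */
	spdk_dma_free(buf);
	return 0;
}
```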
00:05:23.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.430 EAL: Restoring previous memory policy: 4 00:05:23.430 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.430 EAL: request: mp_malloc_sync 00:05:23.430 EAL: No shared files mode enabled, IPC is disabled 00:05:23.430 EAL: Heap on socket 0 was expanded by 514MB 00:05:23.430 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.689 EAL: request: mp_malloc_sync 00:05:23.689 EAL: No shared files mode enabled, IPC is disabled 00:05:23.689 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.689 EAL: Trying to obtain current memory policy. 00:05:23.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.948 EAL: Restoring previous memory policy: 4 00:05:23.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.948 EAL: request: mp_malloc_sync 00:05:23.948 EAL: No shared files mode enabled, IPC is disabled 00:05:23.948 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.207 EAL: request: mp_malloc_sync 00:05:24.207 EAL: No shared files mode enabled, IPC is disabled 00:05:24.207 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:24.207 passed 00:05:24.207 00:05:24.207 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.207 suites 1 1 n/a 0 0 00:05:24.207 tests 2 2 2 0 0 00:05:24.207 asserts 497 497 497 0 n/a 00:05:24.207 00:05:24.207 Elapsed time = 1.164 seconds 00:05:24.207 EAL: Calling mem event callback 'spdk:(nil)' 00:05:24.207 EAL: request: mp_malloc_sync 00:05:24.207 EAL: No shared files mode enabled, IPC is disabled 00:05:24.207 EAL: Heap on socket 0 was shrunk by 2MB 00:05:24.207 EAL: No shared files mode enabled, IPC is disabled 00:05:24.207 EAL: No shared files mode enabled, IPC is disabled 00:05:24.207 EAL: No shared files mode enabled, IPC is disabled 00:05:24.207 00:05:24.207 real 0m1.336s 00:05:24.207 user 0m0.749s 00:05:24.207 sys 0m0.555s 00:05:24.207 02:31:27 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.207 02:31:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:24.207 ************************************ 00:05:24.207 END TEST env_vtophys 00:05:24.207 ************************************ 00:05:24.207 02:31:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.207 02:31:27 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:24.207 02:31:27 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.207 02:31:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.207 ************************************ 00:05:24.207 START TEST env_pci 00:05:24.207 ************************************ 00:05:24.207 02:31:27 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:24.207 00:05:24.207 00:05:24.207 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.207 http://cunit.sourceforge.net/ 00:05:24.207 00:05:24.207 00:05:24.207 Suite: pci 00:05:24.207 Test: pci_hook ...[2024-05-15 02:31:27.490690] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 658584 has claimed it 00:05:24.467 EAL: Cannot find device (10000:00:01.0) 00:05:24.467 EAL: Failed to attach device on primary process 00:05:24.467 passed 00:05:24.467 00:05:24.467 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.467 suites 1 1 
n/a 0 0 00:05:24.467 tests 1 1 1 0 0 00:05:24.467 asserts 25 25 25 0 n/a 00:05:24.467 00:05:24.467 Elapsed time = 0.040 seconds 00:05:24.467 00:05:24.467 real 0m0.062s 00:05:24.467 user 0m0.015s 00:05:24.467 sys 0m0.046s 00:05:24.467 02:31:27 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.467 02:31:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:24.467 ************************************ 00:05:24.467 END TEST env_pci 00:05:24.467 ************************************ 00:05:24.467 02:31:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.467 02:31:27 env -- env/env.sh@15 -- # uname 00:05:24.467 02:31:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.467 02:31:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.467 02:31:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.467 02:31:27 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:05:24.467 02:31:27 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:24.467 02:31:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.467 ************************************ 00:05:24.467 START TEST env_dpdk_post_init 00:05:24.467 ************************************ 00:05:24.467 02:31:27 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.467 EAL: Detected CPU lcores: 72 00:05:24.467 EAL: Detected NUMA nodes: 2 00:05:24.467 EAL: Detected shared linkage of DPDK 00:05:24.467 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.467 EAL: Selected IOVA mode 'VA' 00:05:24.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.467 EAL: VFIO support initialized 00:05:24.467 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.726 EAL: Using IOMMU type 1 (Type 1) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:24.726 EAL: Ignore mapping IO port bar(1) 00:05:24.726 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:24.986 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:24.986 EAL: Ignore mapping 
IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:24.986 EAL: Ignore mapping IO port bar(1) 00:05:24.986 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:24.986 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:24.986 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:25.246 Starting DPDK initialization... 00:05:25.246 Starting SPDK post initialization... 00:05:25.246 SPDK NVMe probe 00:05:25.246 Attaching to 0000:5e:00.0 00:05:25.246 Attached to 0000:5e:00.0 00:05:25.246 Cleaning up... 00:05:25.246 00:05:25.246 real 0m0.733s 00:05:25.246 user 0m0.181s 00:05:25.246 sys 0m0.143s 00:05:25.246 02:31:28 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.246 02:31:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.246 ************************************ 00:05:25.246 END TEST env_dpdk_post_init 00:05:25.246 ************************************ 00:05:25.246 02:31:28 env -- env/env.sh@26 -- # uname 00:05:25.246 02:31:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.246 02:31:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.246 02:31:28 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.246 02:31:28 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.246 02:31:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.246 ************************************ 00:05:25.246 START TEST env_mem_callbacks 00:05:25.246 ************************************ 00:05:25.246 02:31:28 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.246 EAL: Detected CPU lcores: 72 00:05:25.246 EAL: Detected NUMA nodes: 2 00:05:25.246 EAL: Detected shared linkage of DPDK 00:05:25.246 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.506 EAL: Selected IOVA mode 'VA' 00:05:25.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.506 EAL: VFIO support initialized 00:05:25.506 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.506 00:05:25.506 00:05:25.506 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.506 http://cunit.sourceforge.net/ 00:05:25.506 00:05:25.506 00:05:25.506 Suite: memory 00:05:25.506 Test: test ... 
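The register/malloc/unregister trace that follows comes from the test notifying the env memory map about buffers it allocates. A minimal sketch of that pattern, assuming the public spdk_mem_register()/spdk_mem_unregister() API; the helper below is illustrative, and the regions are 2 MiB-aligned because the map tracks memory at 2 MiB granularity, as the addresses in the trace suggest.

```c
#include "spdk/env.h"
#include <stdio.h>

static int
register_region(void *vaddr, size_t len)
{
	int rc;

	/* Tell the env layer (and any registered memory callbacks, such as the
	 * 'spdk:(nil)' callback seen earlier) about a new region. */
	rc = spdk_mem_register(vaddr, len);
	if (rc != 0) {
		fprintf(stderr, "spdk_mem_register failed: %d\n", rc);
		return rc;
	}

	/* ... DMA into the region would happen here ... */

	/* Remove the region again; this is the "unregister ... PASSED" step. */
	rc = spdk_mem_unregister(vaddr, len);
	if (rc != 0) {
		fprintf(stderr, "spdk_mem_unregister failed: %d\n", rc);
	}
	return rc;
}
```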
00:05:25.506 register 0x200000200000 2097152 00:05:25.506 malloc 3145728 00:05:25.506 register 0x200000400000 4194304 00:05:25.506 buf 0x200000500000 len 3145728 PASSED 00:05:25.506 malloc 64 00:05:25.506 buf 0x2000004fff40 len 64 PASSED 00:05:25.506 malloc 4194304 00:05:25.506 register 0x200000800000 6291456 00:05:25.506 buf 0x200000a00000 len 4194304 PASSED 00:05:25.506 free 0x200000500000 3145728 00:05:25.506 free 0x2000004fff40 64 00:05:25.506 unregister 0x200000400000 4194304 PASSED 00:05:25.506 free 0x200000a00000 4194304 00:05:25.506 unregister 0x200000800000 6291456 PASSED 00:05:25.506 malloc 8388608 00:05:25.506 register 0x200000400000 10485760 00:05:25.506 buf 0x200000600000 len 8388608 PASSED 00:05:25.506 free 0x200000600000 8388608 00:05:25.506 unregister 0x200000400000 10485760 PASSED 00:05:25.506 passed 00:05:25.506 00:05:25.506 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.506 suites 1 1 n/a 0 0 00:05:25.506 tests 1 1 1 0 0 00:05:25.506 asserts 15 15 15 0 n/a 00:05:25.506 00:05:25.506 Elapsed time = 0.009 seconds 00:05:25.506 00:05:25.506 real 0m0.081s 00:05:25.506 user 0m0.022s 00:05:25.506 sys 0m0.059s 00:05:25.506 02:31:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.506 02:31:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.506 ************************************ 00:05:25.506 END TEST env_mem_callbacks 00:05:25.506 ************************************ 00:05:25.506 00:05:25.506 real 0m3.031s 00:05:25.506 user 0m1.396s 00:05:25.506 sys 0m1.213s 00:05:25.506 02:31:28 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:25.506 02:31:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.506 ************************************ 00:05:25.506 END TEST env 00:05:25.506 ************************************ 00:05:25.506 02:31:28 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.506 02:31:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.506 02:31:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.506 02:31:28 -- common/autotest_common.sh@10 -- # set +x 00:05:25.506 ************************************ 00:05:25.506 START TEST rpc 00:05:25.506 ************************************ 00:05:25.506 02:31:28 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.506 * Looking for test storage... 00:05:25.765 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:25.765 02:31:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=658890 00:05:25.765 02:31:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.765 02:31:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.765 02:31:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 658890 00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@828 -- # '[' -z 658890 ']' 00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:25.765 02:31:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.765 [2024-05-15 02:31:28.865428] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:25.765 [2024-05-15 02:31:28.865499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658890 ] 00:05:25.765 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.765 [2024-05-15 02:31:28.974473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.765 [2024-05-15 02:31:29.021796] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.765 [2024-05-15 02:31:29.021846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 658890' to capture a snapshot of events at runtime. 00:05:25.766 [2024-05-15 02:31:29.021861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.766 [2024-05-15 02:31:29.021873] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.766 [2024-05-15 02:31:29.021884] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid658890 for offline analysis/debug. 00:05:25.766 [2024-05-15 02:31:29.021920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.025 02:31:29 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:26.025 02:31:29 rpc -- common/autotest_common.sh@861 -- # return 0 00:05:26.025 02:31:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.025 02:31:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.025 02:31:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.025 02:31:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.025 02:31:29 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.025 02:31:29 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.025 02:31:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.025 ************************************ 00:05:26.025 START TEST rpc_integrity 00:05:26.025 ************************************ 00:05:26.025 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:05:26.025 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.025 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.025 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.025 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.025 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.025 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.284 
02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.284 { 00:05:26.284 "name": "Malloc0", 00:05:26.284 "aliases": [ 00:05:26.284 "3e2faa30-39b4-41d4-b27e-2d6f64cd296f" 00:05:26.284 ], 00:05:26.284 "product_name": "Malloc disk", 00:05:26.284 "block_size": 512, 00:05:26.284 "num_blocks": 16384, 00:05:26.284 "uuid": "3e2faa30-39b4-41d4-b27e-2d6f64cd296f", 00:05:26.284 "assigned_rate_limits": { 00:05:26.284 "rw_ios_per_sec": 0, 00:05:26.284 "rw_mbytes_per_sec": 0, 00:05:26.284 "r_mbytes_per_sec": 0, 00:05:26.284 "w_mbytes_per_sec": 0 00:05:26.284 }, 00:05:26.284 "claimed": false, 00:05:26.284 "zoned": false, 00:05:26.284 "supported_io_types": { 00:05:26.284 "read": true, 00:05:26.284 "write": true, 00:05:26.284 "unmap": true, 00:05:26.284 "write_zeroes": true, 00:05:26.284 "flush": true, 00:05:26.284 "reset": true, 00:05:26.284 "compare": false, 00:05:26.284 "compare_and_write": false, 00:05:26.284 "abort": true, 00:05:26.284 "nvme_admin": false, 00:05:26.284 "nvme_io": false 00:05:26.284 }, 00:05:26.284 "memory_domains": [ 00:05:26.284 { 00:05:26.284 "dma_device_id": "system", 00:05:26.284 "dma_device_type": 1 00:05:26.284 }, 00:05:26.284 { 00:05:26.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.284 "dma_device_type": 2 00:05:26.284 } 00:05:26.284 ], 00:05:26.284 "driver_specific": {} 00:05:26.284 } 00:05:26.284 ]' 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 [2024-05-15 02:31:29.410607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.284 [2024-05-15 02:31:29.410643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.284 [2024-05-15 02:31:29.410662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x69ec90 00:05:26.284 [2024-05-15 02:31:29.410675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.284 [2024-05-15 02:31:29.412176] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.284 [2024-05-15 02:31:29.412203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.284 Passthru0 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.284 02:31:29 rpc.rpc_integrity 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.284 { 00:05:26.284 "name": "Malloc0", 00:05:26.284 "aliases": [ 00:05:26.284 "3e2faa30-39b4-41d4-b27e-2d6f64cd296f" 00:05:26.284 ], 00:05:26.284 "product_name": "Malloc disk", 00:05:26.284 "block_size": 512, 00:05:26.284 "num_blocks": 16384, 00:05:26.284 "uuid": "3e2faa30-39b4-41d4-b27e-2d6f64cd296f", 00:05:26.284 "assigned_rate_limits": { 00:05:26.284 "rw_ios_per_sec": 0, 00:05:26.284 "rw_mbytes_per_sec": 0, 00:05:26.284 "r_mbytes_per_sec": 0, 00:05:26.284 "w_mbytes_per_sec": 0 00:05:26.284 }, 00:05:26.284 "claimed": true, 00:05:26.284 "claim_type": "exclusive_write", 00:05:26.284 "zoned": false, 00:05:26.284 "supported_io_types": { 00:05:26.284 "read": true, 00:05:26.284 "write": true, 00:05:26.284 "unmap": true, 00:05:26.284 "write_zeroes": true, 00:05:26.284 "flush": true, 00:05:26.284 "reset": true, 00:05:26.284 "compare": false, 00:05:26.284 "compare_and_write": false, 00:05:26.284 "abort": true, 00:05:26.284 "nvme_admin": false, 00:05:26.284 "nvme_io": false 00:05:26.284 }, 00:05:26.284 "memory_domains": [ 00:05:26.284 { 00:05:26.284 "dma_device_id": "system", 00:05:26.284 "dma_device_type": 1 00:05:26.284 }, 00:05:26.284 { 00:05:26.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.284 "dma_device_type": 2 00:05:26.284 } 00:05:26.284 ], 00:05:26.284 "driver_specific": {} 00:05:26.284 }, 00:05:26.284 { 00:05:26.284 "name": "Passthru0", 00:05:26.284 "aliases": [ 00:05:26.284 "ded3ccdd-4d8c-5ec6-b229-8b998e6bcfe7" 00:05:26.284 ], 00:05:26.284 "product_name": "passthru", 00:05:26.284 "block_size": 512, 00:05:26.284 "num_blocks": 16384, 00:05:26.284 "uuid": "ded3ccdd-4d8c-5ec6-b229-8b998e6bcfe7", 00:05:26.284 "assigned_rate_limits": { 00:05:26.284 "rw_ios_per_sec": 0, 00:05:26.284 "rw_mbytes_per_sec": 0, 00:05:26.284 "r_mbytes_per_sec": 0, 00:05:26.284 "w_mbytes_per_sec": 0 00:05:26.284 }, 00:05:26.284 "claimed": false, 00:05:26.284 "zoned": false, 00:05:26.284 "supported_io_types": { 00:05:26.284 "read": true, 00:05:26.284 "write": true, 00:05:26.284 "unmap": true, 00:05:26.284 "write_zeroes": true, 00:05:26.284 "flush": true, 00:05:26.284 "reset": true, 00:05:26.284 "compare": false, 00:05:26.284 "compare_and_write": false, 00:05:26.284 "abort": true, 00:05:26.284 "nvme_admin": false, 00:05:26.284 "nvme_io": false 00:05:26.284 }, 00:05:26.284 "memory_domains": [ 00:05:26.284 { 00:05:26.284 "dma_device_id": "system", 00:05:26.284 "dma_device_type": 1 00:05:26.284 }, 00:05:26.284 { 00:05:26.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.284 "dma_device_type": 2 00:05:26.284 } 00:05:26.284 ], 00:05:26.284 "driver_specific": { 00:05:26.284 "passthru": { 00:05:26.284 "name": "Passthru0", 00:05:26.284 "base_bdev_name": "Malloc0" 00:05:26.284 } 00:05:26.284 } 00:05:26.284 } 00:05:26.284 ]' 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.284 02:31:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.284 00:05:26.284 real 0m0.297s 00:05:26.284 user 0m0.186s 00:05:26.284 sys 0m0.049s 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.284 02:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.284 ************************************ 00:05:26.284 END TEST rpc_integrity 00:05:26.284 ************************************ 00:05:26.543 02:31:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.543 02:31:29 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.543 02:31:29 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.543 02:31:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.543 ************************************ 00:05:26.543 START TEST rpc_plugins 00:05:26.543 ************************************ 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:05:26.543 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.543 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.543 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.543 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.543 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.543 { 00:05:26.543 "name": "Malloc1", 00:05:26.543 "aliases": [ 00:05:26.543 "29393947-4566-44dd-90c6-206c19c03cfc" 00:05:26.543 ], 00:05:26.543 "product_name": "Malloc disk", 00:05:26.543 "block_size": 4096, 00:05:26.543 "num_blocks": 256, 00:05:26.543 "uuid": "29393947-4566-44dd-90c6-206c19c03cfc", 00:05:26.543 "assigned_rate_limits": { 00:05:26.543 "rw_ios_per_sec": 0, 00:05:26.543 "rw_mbytes_per_sec": 0, 00:05:26.543 "r_mbytes_per_sec": 0, 00:05:26.543 "w_mbytes_per_sec": 0 00:05:26.543 }, 00:05:26.543 "claimed": false, 00:05:26.543 "zoned": false, 00:05:26.543 "supported_io_types": { 00:05:26.543 "read": true, 00:05:26.543 "write": true, 00:05:26.544 "unmap": true, 00:05:26.544 "write_zeroes": true, 00:05:26.544 "flush": true, 00:05:26.544 
"reset": true, 00:05:26.544 "compare": false, 00:05:26.544 "compare_and_write": false, 00:05:26.544 "abort": true, 00:05:26.544 "nvme_admin": false, 00:05:26.544 "nvme_io": false 00:05:26.544 }, 00:05:26.544 "memory_domains": [ 00:05:26.544 { 00:05:26.544 "dma_device_id": "system", 00:05:26.544 "dma_device_type": 1 00:05:26.544 }, 00:05:26.544 { 00:05:26.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.544 "dma_device_type": 2 00:05:26.544 } 00:05:26.544 ], 00:05:26.544 "driver_specific": {} 00:05:26.544 } 00:05:26.544 ]' 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.544 02:31:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.544 00:05:26.544 real 0m0.153s 00:05:26.544 user 0m0.093s 00:05:26.544 sys 0m0.027s 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:26.544 02:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.544 ************************************ 00:05:26.544 END TEST rpc_plugins 00:05:26.544 ************************************ 00:05:26.804 02:31:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.804 02:31:29 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:26.804 02:31:29 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:26.804 02:31:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.804 ************************************ 00:05:26.804 START TEST rpc_trace_cmd_test 00:05:26.804 ************************************ 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:26.804 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.804 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid658890", 00:05:26.804 "tpoint_group_mask": "0x8", 00:05:26.804 "iscsi_conn": { 00:05:26.804 "mask": "0x2", 00:05:26.804 "tpoint_mask": "0x0" 00:05:26.804 }, 00:05:26.804 "scsi": { 00:05:26.804 "mask": "0x4", 00:05:26.804 "tpoint_mask": "0x0" 00:05:26.804 }, 00:05:26.804 "bdev": { 00:05:26.804 "mask": "0x8", 00:05:26.804 "tpoint_mask": "0xffffffffffffffff" 00:05:26.804 }, 
00:05:26.804 "nvmf_rdma": { 00:05:26.804 "mask": "0x10", 00:05:26.804 "tpoint_mask": "0x0" 00:05:26.804 }, 00:05:26.804 "nvmf_tcp": { 00:05:26.804 "mask": "0x20", 00:05:26.804 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "ftl": { 00:05:26.805 "mask": "0x40", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "blobfs": { 00:05:26.805 "mask": "0x80", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "dsa": { 00:05:26.805 "mask": "0x200", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "thread": { 00:05:26.805 "mask": "0x400", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "nvme_pcie": { 00:05:26.805 "mask": "0x800", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "iaa": { 00:05:26.805 "mask": "0x1000", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "nvme_tcp": { 00:05:26.805 "mask": "0x2000", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "bdev_nvme": { 00:05:26.805 "mask": "0x4000", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 }, 00:05:26.805 "sock": { 00:05:26.805 "mask": "0x8000", 00:05:26.805 "tpoint_mask": "0x0" 00:05:26.805 } 00:05:26.805 }' 00:05:26.805 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.805 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.805 02:31:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.805 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.805 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.805 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.805 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.065 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.065 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.065 02:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:27.065 00:05:27.065 real 0m0.256s 00:05:27.065 user 0m0.209s 00:05:27.065 sys 0m0.040s 00:05:27.065 02:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.065 02:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.065 ************************************ 00:05:27.065 END TEST rpc_trace_cmd_test 00:05:27.065 ************************************ 00:05:27.065 02:31:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.065 02:31:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.065 02:31:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.065 02:31:30 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:27.065 02:31:30 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:27.065 02:31:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.065 ************************************ 00:05:27.065 START TEST rpc_daemon_integrity 00:05:27.065 ************************************ 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.065 { 00:05:27.065 "name": "Malloc2", 00:05:27.065 "aliases": [ 00:05:27.065 "befb5531-b851-4456-8b33-29a1c121b7a5" 00:05:27.065 ], 00:05:27.065 "product_name": "Malloc disk", 00:05:27.065 "block_size": 512, 00:05:27.065 "num_blocks": 16384, 00:05:27.065 "uuid": "befb5531-b851-4456-8b33-29a1c121b7a5", 00:05:27.065 "assigned_rate_limits": { 00:05:27.065 "rw_ios_per_sec": 0, 00:05:27.065 "rw_mbytes_per_sec": 0, 00:05:27.065 "r_mbytes_per_sec": 0, 00:05:27.065 "w_mbytes_per_sec": 0 00:05:27.065 }, 00:05:27.065 "claimed": false, 00:05:27.065 "zoned": false, 00:05:27.065 "supported_io_types": { 00:05:27.065 "read": true, 00:05:27.065 "write": true, 00:05:27.065 "unmap": true, 00:05:27.065 "write_zeroes": true, 00:05:27.065 "flush": true, 00:05:27.065 "reset": true, 00:05:27.065 "compare": false, 00:05:27.065 "compare_and_write": false, 00:05:27.065 "abort": true, 00:05:27.065 "nvme_admin": false, 00:05:27.065 "nvme_io": false 00:05:27.065 }, 00:05:27.065 "memory_domains": [ 00:05:27.065 { 00:05:27.065 "dma_device_id": "system", 00:05:27.065 "dma_device_type": 1 00:05:27.065 }, 00:05:27.065 { 00:05:27.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.065 "dma_device_type": 2 00:05:27.065 } 00:05:27.065 ], 00:05:27.065 "driver_specific": {} 00:05:27.065 } 00:05:27.065 ]' 00:05:27.065 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.325 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.325 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.325 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 [2024-05-15 02:31:30.389410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.326 [2024-05-15 02:31:30.389448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.326 [2024-05-15 02:31:30.389467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x836ca0 00:05:27.326 [2024-05-15 02:31:30.389479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.326 [2024-05-15 02:31:30.390802] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:27.326 [2024-05-15 02:31:30.390829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.326 Passthru0 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.326 { 00:05:27.326 "name": "Malloc2", 00:05:27.326 "aliases": [ 00:05:27.326 "befb5531-b851-4456-8b33-29a1c121b7a5" 00:05:27.326 ], 00:05:27.326 "product_name": "Malloc disk", 00:05:27.326 "block_size": 512, 00:05:27.326 "num_blocks": 16384, 00:05:27.326 "uuid": "befb5531-b851-4456-8b33-29a1c121b7a5", 00:05:27.326 "assigned_rate_limits": { 00:05:27.326 "rw_ios_per_sec": 0, 00:05:27.326 "rw_mbytes_per_sec": 0, 00:05:27.326 "r_mbytes_per_sec": 0, 00:05:27.326 "w_mbytes_per_sec": 0 00:05:27.326 }, 00:05:27.326 "claimed": true, 00:05:27.326 "claim_type": "exclusive_write", 00:05:27.326 "zoned": false, 00:05:27.326 "supported_io_types": { 00:05:27.326 "read": true, 00:05:27.326 "write": true, 00:05:27.326 "unmap": true, 00:05:27.326 "write_zeroes": true, 00:05:27.326 "flush": true, 00:05:27.326 "reset": true, 00:05:27.326 "compare": false, 00:05:27.326 "compare_and_write": false, 00:05:27.326 "abort": true, 00:05:27.326 "nvme_admin": false, 00:05:27.326 "nvme_io": false 00:05:27.326 }, 00:05:27.326 "memory_domains": [ 00:05:27.326 { 00:05:27.326 "dma_device_id": "system", 00:05:27.326 "dma_device_type": 1 00:05:27.326 }, 00:05:27.326 { 00:05:27.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.326 "dma_device_type": 2 00:05:27.326 } 00:05:27.326 ], 00:05:27.326 "driver_specific": {} 00:05:27.326 }, 00:05:27.326 { 00:05:27.326 "name": "Passthru0", 00:05:27.326 "aliases": [ 00:05:27.326 "f39faee2-0ce4-5ede-be4d-d2fcd1751ca6" 00:05:27.326 ], 00:05:27.326 "product_name": "passthru", 00:05:27.326 "block_size": 512, 00:05:27.326 "num_blocks": 16384, 00:05:27.326 "uuid": "f39faee2-0ce4-5ede-be4d-d2fcd1751ca6", 00:05:27.326 "assigned_rate_limits": { 00:05:27.326 "rw_ios_per_sec": 0, 00:05:27.326 "rw_mbytes_per_sec": 0, 00:05:27.326 "r_mbytes_per_sec": 0, 00:05:27.326 "w_mbytes_per_sec": 0 00:05:27.326 }, 00:05:27.326 "claimed": false, 00:05:27.326 "zoned": false, 00:05:27.326 "supported_io_types": { 00:05:27.326 "read": true, 00:05:27.326 "write": true, 00:05:27.326 "unmap": true, 00:05:27.326 "write_zeroes": true, 00:05:27.326 "flush": true, 00:05:27.326 "reset": true, 00:05:27.326 "compare": false, 00:05:27.326 "compare_and_write": false, 00:05:27.326 "abort": true, 00:05:27.326 "nvme_admin": false, 00:05:27.326 "nvme_io": false 00:05:27.326 }, 00:05:27.326 "memory_domains": [ 00:05:27.326 { 00:05:27.326 "dma_device_id": "system", 00:05:27.326 "dma_device_type": 1 00:05:27.326 }, 00:05:27.326 { 00:05:27.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.326 "dma_device_type": 2 00:05:27.326 } 00:05:27.326 ], 00:05:27.326 "driver_specific": { 00:05:27.326 "passthru": { 00:05:27.326 "name": "Passthru0", 00:05:27.326 "base_bdev_name": "Malloc2" 00:05:27.326 } 00:05:27.326 } 00:05:27.326 } 00:05:27.326 ]' 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
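The "claimed": true / "claim_type": "exclusive_write" fields above reflect the passthru vbdev having opened and claimed its base Malloc2 bdev (the vbdev_passthru.c NOTICE lines earlier in this test). A rough sketch of that open-and-claim pattern, assuming the generic bdev module API; the module descriptor and names are placeholders, not the actual passthru implementation.

```c
#include "spdk/bdev.h"
#include "spdk/bdev_module.h"

/* Placeholder module descriptor; a real vbdev registers one with
 * SPDK_BDEV_MODULE_REGISTER(). */
static struct spdk_bdev_module example_if = {
	.name = "example_passthru",
};

static void
base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
	/* React to resize/remove events on the base bdev. */
}

static int
claim_base_bdev(const char *base_name, struct spdk_bdev_desc **out_desc)
{
	struct spdk_bdev_desc *desc;
	int rc;

	/* "base bdev opened" */
	rc = spdk_bdev_open_ext(base_name, true, base_bdev_event_cb, NULL, &desc);
	if (rc != 0) {
		return rc;
	}

	/* "bdev claimed": take an exclusive_write claim on the base bdev. */
	rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(desc), desc, &example_if);
	if (rc != 0) {
		spdk_bdev_close(desc);
		return rc;
	}

	*out_desc = desc;
	return 0;
}
```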
00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.326 00:05:27.326 real 0m0.296s 00:05:27.326 user 0m0.182s 00:05:27.326 sys 0m0.051s 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.326 02:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.326 ************************************ 00:05:27.326 END TEST rpc_daemon_integrity 00:05:27.326 ************************************ 00:05:27.326 02:31:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.326 02:31:30 rpc -- rpc/rpc.sh@84 -- # killprocess 658890 00:05:27.326 02:31:30 rpc -- common/autotest_common.sh@947 -- # '[' -z 658890 ']' 00:05:27.326 02:31:30 rpc -- common/autotest_common.sh@951 -- # kill -0 658890 00:05:27.326 02:31:30 rpc -- common/autotest_common.sh@952 -- # uname 00:05:27.326 02:31:30 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:27.326 02:31:30 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 658890 00:05:27.586 02:31:30 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:27.586 02:31:30 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:27.586 02:31:30 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 658890' 00:05:27.586 killing process with pid 658890 00:05:27.586 02:31:30 rpc -- common/autotest_common.sh@966 -- # kill 658890 00:05:27.586 02:31:30 rpc -- common/autotest_common.sh@971 -- # wait 658890 00:05:27.845 00:05:27.845 real 0m2.296s 00:05:27.845 user 0m2.902s 00:05:27.845 sys 0m0.878s 00:05:27.845 02:31:30 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:27.845 02:31:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.845 ************************************ 00:05:27.845 END TEST rpc 00:05:27.845 ************************************ 00:05:27.845 02:31:31 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.845 02:31:31 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:27.845 
02:31:31 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:27.845 02:31:31 -- common/autotest_common.sh@10 -- # set +x 00:05:27.845 ************************************ 00:05:27.845 START TEST skip_rpc 00:05:27.845 ************************************ 00:05:27.845 02:31:31 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.104 * Looking for test storage... 00:05:28.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:28.104 02:31:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:28.104 02:31:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:28.104 02:31:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:28.104 02:31:31 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:28.104 02:31:31 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:28.104 02:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.104 ************************************ 00:05:28.104 START TEST skip_rpc 00:05:28.104 ************************************ 00:05:28.104 02:31:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:05:28.104 02:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=659308 00:05:28.104 02:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.104 02:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:28.104 02:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:28.104 [2024-05-15 02:31:31.301824] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:28.104 [2024-05-15 02:31:31.301892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659308 ] 00:05:28.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.363 [2024-05-15 02:31:31.409325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.363 [2024-05-15 02:31:31.457012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.637 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 659308 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 659308 ']' 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 659308 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 659308 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 659308' 00:05:33.638 killing process with pid 659308 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 659308 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 659308 00:05:33.638 00:05:33.638 real 0m5.405s 00:05:33.638 user 0m5.104s 00:05:33.638 sys 0m0.333s 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:33.638 02:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 ************************************ 00:05:33.638 END TEST skip_rpc 
00:05:33.638 ************************************ 00:05:33.638 02:31:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:33.638 02:31:36 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:33.638 02:31:36 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:33.638 02:31:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 ************************************ 00:05:33.638 START TEST skip_rpc_with_json 00:05:33.638 ************************************ 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=660059 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 660059 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 660059 ']' 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:33.638 02:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.638 [2024-05-15 02:31:36.808606] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
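The skip_rpc_with_json case that starts here exercises the configuration round-trip: a normally started target is configured over RPC, its state is dumped with save_config, and a second target is later launched from that JSON alone. Roughly (CONFIG and LOG stand for the CONFIG_PATH and LOG_PATH set at the top of skip_rpc.sh; the redirection and sleep are illustrative, not the script's exact wording):

    # Configure the live target and capture its state as JSON
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > "$CONFIG"
    # Re-launch from the saved JSON with no RPC server, logging to a file
    $SPDK_BIN/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
    sleep 5
    # The TCP transport must have been re-created purely from the JSON
    grep -q 'TCP Transport Init' "$LOG"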
00:05:33.638 [2024-05-15 02:31:36.808682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660059 ] 00:05:33.638 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.638 [2024-05-15 02:31:36.919226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.896 [2024-05-15 02:31:36.973029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.896 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:33.896 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:05:33.896 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.896 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.154 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.154 [2024-05-15 02:31:37.191678] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.154 request: 00:05:34.154 { 00:05:34.154 "trtype": "tcp", 00:05:34.154 "method": "nvmf_get_transports", 00:05:34.154 "req_id": 1 00:05:34.154 } 00:05:34.155 Got JSON-RPC error response 00:05:34.155 response: 00:05:34.155 { 00:05:34.155 "code": -19, 00:05:34.155 "message": "No such device" 00:05:34.155 } 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.155 [2024-05-15 02:31:37.203808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.155 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:34.155 { 00:05:34.155 "subsystems": [ 00:05:34.155 { 00:05:34.155 "subsystem": "keyring", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "iobuf", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "iobuf_set_options", 00:05:34.155 "params": { 00:05:34.155 "small_pool_count": 8192, 00:05:34.155 "large_pool_count": 1024, 00:05:34.155 "small_bufsize": 8192, 00:05:34.155 "large_bufsize": 135168 00:05:34.155 } 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "sock", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "sock_impl_set_options", 00:05:34.155 "params": { 00:05:34.155 "impl_name": "posix", 00:05:34.155 "recv_buf_size": 2097152, 00:05:34.155 "send_buf_size": 2097152, 00:05:34.155 "enable_recv_pipe": true, 00:05:34.155 "enable_quickack": false, 00:05:34.155 
"enable_placement_id": 0, 00:05:34.155 "enable_zerocopy_send_server": true, 00:05:34.155 "enable_zerocopy_send_client": false, 00:05:34.155 "zerocopy_threshold": 0, 00:05:34.155 "tls_version": 0, 00:05:34.155 "enable_ktls": false 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "sock_impl_set_options", 00:05:34.155 "params": { 00:05:34.155 "impl_name": "ssl", 00:05:34.155 "recv_buf_size": 4096, 00:05:34.155 "send_buf_size": 4096, 00:05:34.155 "enable_recv_pipe": true, 00:05:34.155 "enable_quickack": false, 00:05:34.155 "enable_placement_id": 0, 00:05:34.155 "enable_zerocopy_send_server": true, 00:05:34.155 "enable_zerocopy_send_client": false, 00:05:34.155 "zerocopy_threshold": 0, 00:05:34.155 "tls_version": 0, 00:05:34.155 "enable_ktls": false 00:05:34.155 } 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "vmd", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "accel", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "accel_set_options", 00:05:34.155 "params": { 00:05:34.155 "small_cache_size": 128, 00:05:34.155 "large_cache_size": 16, 00:05:34.155 "task_count": 2048, 00:05:34.155 "sequence_count": 2048, 00:05:34.155 "buf_count": 2048 00:05:34.155 } 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "bdev", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "bdev_set_options", 00:05:34.155 "params": { 00:05:34.155 "bdev_io_pool_size": 65535, 00:05:34.155 "bdev_io_cache_size": 256, 00:05:34.155 "bdev_auto_examine": true, 00:05:34.155 "iobuf_small_cache_size": 128, 00:05:34.155 "iobuf_large_cache_size": 16 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "bdev_raid_set_options", 00:05:34.155 "params": { 00:05:34.155 "process_window_size_kb": 1024 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "bdev_iscsi_set_options", 00:05:34.155 "params": { 00:05:34.155 "timeout_sec": 30 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "bdev_nvme_set_options", 00:05:34.155 "params": { 00:05:34.155 "action_on_timeout": "none", 00:05:34.155 "timeout_us": 0, 00:05:34.155 "timeout_admin_us": 0, 00:05:34.155 "keep_alive_timeout_ms": 10000, 00:05:34.155 "arbitration_burst": 0, 00:05:34.155 "low_priority_weight": 0, 00:05:34.155 "medium_priority_weight": 0, 00:05:34.155 "high_priority_weight": 0, 00:05:34.155 "nvme_adminq_poll_period_us": 10000, 00:05:34.155 "nvme_ioq_poll_period_us": 0, 00:05:34.155 "io_queue_requests": 0, 00:05:34.155 "delay_cmd_submit": true, 00:05:34.155 "transport_retry_count": 4, 00:05:34.155 "bdev_retry_count": 3, 00:05:34.155 "transport_ack_timeout": 0, 00:05:34.155 "ctrlr_loss_timeout_sec": 0, 00:05:34.155 "reconnect_delay_sec": 0, 00:05:34.155 "fast_io_fail_timeout_sec": 0, 00:05:34.155 "disable_auto_failback": false, 00:05:34.155 "generate_uuids": false, 00:05:34.155 "transport_tos": 0, 00:05:34.155 "nvme_error_stat": false, 00:05:34.155 "rdma_srq_size": 0, 00:05:34.155 "io_path_stat": false, 00:05:34.155 "allow_accel_sequence": false, 00:05:34.155 "rdma_max_cq_size": 0, 00:05:34.155 "rdma_cm_event_timeout_ms": 0, 00:05:34.155 "dhchap_digests": [ 00:05:34.155 "sha256", 00:05:34.155 "sha384", 00:05:34.155 "sha512" 00:05:34.155 ], 00:05:34.155 "dhchap_dhgroups": [ 00:05:34.155 "null", 00:05:34.155 "ffdhe2048", 00:05:34.155 "ffdhe3072", 00:05:34.155 "ffdhe4096", 00:05:34.155 "ffdhe6144", 00:05:34.155 "ffdhe8192" 00:05:34.155 ] 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 
00:05:34.155 "method": "bdev_nvme_set_hotplug", 00:05:34.155 "params": { 00:05:34.155 "period_us": 100000, 00:05:34.155 "enable": false 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "bdev_wait_for_examine" 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "scsi", 00:05:34.155 "config": null 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "scheduler", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "framework_set_scheduler", 00:05:34.155 "params": { 00:05:34.155 "name": "static" 00:05:34.155 } 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "vhost_scsi", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "vhost_blk", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "ublk", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "nbd", 00:05:34.155 "config": [] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "nvmf", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "nvmf_set_config", 00:05:34.155 "params": { 00:05:34.155 "discovery_filter": "match_any", 00:05:34.155 "admin_cmd_passthru": { 00:05:34.155 "identify_ctrlr": false 00:05:34.155 } 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "nvmf_set_max_subsystems", 00:05:34.155 "params": { 00:05:34.155 "max_subsystems": 1024 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "nvmf_set_crdt", 00:05:34.155 "params": { 00:05:34.155 "crdt1": 0, 00:05:34.155 "crdt2": 0, 00:05:34.155 "crdt3": 0 00:05:34.155 } 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "method": "nvmf_create_transport", 00:05:34.155 "params": { 00:05:34.155 "trtype": "TCP", 00:05:34.155 "max_queue_depth": 128, 00:05:34.155 "max_io_qpairs_per_ctrlr": 127, 00:05:34.155 "in_capsule_data_size": 4096, 00:05:34.155 "max_io_size": 131072, 00:05:34.155 "io_unit_size": 131072, 00:05:34.155 "max_aq_depth": 128, 00:05:34.155 "num_shared_buffers": 511, 00:05:34.155 "buf_cache_size": 4294967295, 00:05:34.155 "dif_insert_or_strip": false, 00:05:34.155 "zcopy": false, 00:05:34.155 "c2h_success": true, 00:05:34.155 "sock_priority": 0, 00:05:34.155 "abort_timeout_sec": 1, 00:05:34.155 "ack_timeout": 0, 00:05:34.155 "data_wr_pool_size": 0 00:05:34.155 } 00:05:34.155 } 00:05:34.155 ] 00:05:34.155 }, 00:05:34.155 { 00:05:34.155 "subsystem": "iscsi", 00:05:34.155 "config": [ 00:05:34.155 { 00:05:34.155 "method": "iscsi_set_options", 00:05:34.155 "params": { 00:05:34.155 "node_base": "iqn.2016-06.io.spdk", 00:05:34.155 "max_sessions": 128, 00:05:34.155 "max_connections_per_session": 2, 00:05:34.155 "max_queue_depth": 64, 00:05:34.155 "default_time2wait": 2, 00:05:34.155 "default_time2retain": 20, 00:05:34.155 "first_burst_length": 8192, 00:05:34.155 "immediate_data": true, 00:05:34.155 "allow_duplicated_isid": false, 00:05:34.155 "error_recovery_level": 0, 00:05:34.155 "nop_timeout": 60, 00:05:34.155 "nop_in_interval": 30, 00:05:34.155 "disable_chap": false, 00:05:34.155 "require_chap": false, 00:05:34.155 "mutual_chap": false, 00:05:34.155 "chap_group": 0, 00:05:34.156 "max_large_datain_per_connection": 64, 00:05:34.156 "max_r2t_per_connection": 4, 00:05:34.156 "pdu_pool_size": 36864, 00:05:34.156 "immediate_data_pool_size": 16384, 00:05:34.156 "data_out_pool_size": 2048 00:05:34.156 } 00:05:34.156 } 00:05:34.156 ] 00:05:34.156 } 00:05:34.156 ] 00:05:34.156 } 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 660059 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 660059 ']' 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 660059 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 660059 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 660059' 00:05:34.156 killing process with pid 660059 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 660059 00:05:34.156 02:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 660059 00:05:34.724 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=660204 00:05:34.724 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.724 02:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 660204 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 660204 ']' 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 660204 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 660204 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 660204' 00:05:40.002 killing process with pid 660204 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 660204 00:05:40.002 02:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 660204 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:40.002 00:05:40.002 real 0m6.439s 00:05:40.002 user 0m6.031s 00:05:40.002 sys 0m0.769s 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.002 
************************************ 00:05:40.002 END TEST skip_rpc_with_json 00:05:40.002 ************************************ 00:05:40.002 02:31:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:40.002 02:31:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:40.002 02:31:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.002 02:31:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.002 ************************************ 00:05:40.002 START TEST skip_rpc_with_delay 00:05:40.002 ************************************ 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.002 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.262 [2024-05-15 02:31:43.343032] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
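The *ERROR* line just above is the whole point of skip_rpc_with_delay: --wait-for-rpc makes no sense when the RPC server is disabled, so the target must refuse to start. Stripped of the NOT/valid_exec_arg plumbing, the check is essentially a negative test of one invocation (SPDK_BIN as before):

    # Must fail: waiting for RPC while the RPC server is disabled is contradictory
    if $SPDK_BIN/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: --wait-for-rpc was accepted without an RPC server" >&2
        exit 1
    fi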
00:05:40.262 [2024-05-15 02:31:43.343137] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:40.262 00:05:40.262 real 0m0.082s 00:05:40.262 user 0m0.039s 00:05:40.262 sys 0m0.043s 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:40.262 02:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 ************************************ 00:05:40.262 END TEST skip_rpc_with_delay 00:05:40.262 ************************************ 00:05:40.262 02:31:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:40.262 02:31:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:40.262 02:31:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:40.262 02:31:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:40.262 02:31:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:40.262 02:31:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 ************************************ 00:05:40.262 START TEST exit_on_failed_rpc_init 00:05:40.262 ************************************ 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=660988 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 660988 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 660988 ']' 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:40.262 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.262 [2024-05-15 02:31:43.522052] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:40.262 [2024-05-15 02:31:43.522119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660988 ] 00:05:40.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.521 [2024-05-15 02:31:43.631034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.521 [2024-05-15 02:31:43.679562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.781 02:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:40.781 [2024-05-15 02:31:43.967073] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:40.781 [2024-05-15 02:31:43.967145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661161 ] 00:05:40.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.781 [2024-05-15 02:31:44.066020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.040 [2024-05-15 02:31:44.116118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.040 [2024-05-15 02:31:44.116203] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
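The "socket ... in use. Specify another." error at the end of this excerpt is the expected outcome of exit_on_failed_rpc_init: a first target already owns /var/tmp/spdk.sock, so a second target started on another core must fail RPC initialization and exit non-zero. A reduced sketch of the check (waitforlisten is the harness helper seen in the trace):

    # First target claims the default RPC socket
    $SPDK_BIN/spdk_tgt -m 0x1 &
    waitforlisten $!
    # Second target (different core mask) must fail to bind /var/tmp/spdk.sock
    if $SPDK_BIN/spdk_tgt -m 0x2; then
        echo "unexpected: second target started although the socket is in use" >&2
        exit 1
    fi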
00:05:41.040 [2024-05-15 02:31:44.116220] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:41.040 [2024-05-15 02:31:44.116232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 660988 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 660988 ']' 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 660988 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 660988 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 660988' 00:05:41.040 killing process with pid 660988 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 660988 00:05:41.040 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 660988 00:05:41.609 00:05:41.609 real 0m1.148s 00:05:41.609 user 0m1.201s 00:05:41.609 sys 0m0.524s 00:05:41.609 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.609 02:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.609 ************************************ 00:05:41.609 END TEST exit_on_failed_rpc_init 00:05:41.609 ************************************ 00:05:41.609 02:31:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:41.609 00:05:41.609 real 0m13.574s 00:05:41.609 user 0m12.563s 00:05:41.609 sys 0m1.999s 00:05:41.609 02:31:44 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.609 02:31:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.609 ************************************ 00:05:41.609 END TEST skip_rpc 00:05:41.609 ************************************ 00:05:41.609 02:31:44 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.609 02:31:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:41.609 02:31:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.609 02:31:44 -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.609 ************************************ 00:05:41.609 START TEST rpc_client 00:05:41.609 ************************************ 00:05:41.609 02:31:44 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:41.609 * Looking for test storage... 00:05:41.609 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:41.609 02:31:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:41.609 OK 00:05:41.609 02:31:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:41.609 00:05:41.609 real 0m0.136s 00:05:41.609 user 0m0.054s 00:05:41.609 sys 0m0.093s 00:05:41.609 02:31:44 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:41.609 02:31:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:41.609 ************************************ 00:05:41.609 END TEST rpc_client 00:05:41.609 ************************************ 00:05:41.870 02:31:44 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.870 02:31:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:41.870 02:31:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:41.870 02:31:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.870 ************************************ 00:05:41.870 START TEST json_config 00:05:41.870 ************************************ 00:05:41.870 02:31:44 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:41.870 02:31:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.870 02:31:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.870 02:31:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.870 02:31:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.870 02:31:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.870 02:31:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.870 02:31:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.870 02:31:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.870 02:31:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:41.870 INFO: JSON configuration test init 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.870 02:31:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:41.870 02:31:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.870 02:31:45 json_config -- json_config/common.sh@10 -- # shift 00:05:41.870 02:31:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.870 02:31:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.870 02:31:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.870 02:31:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.870 02:31:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.870 02:31:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=661372 00:05:41.870 02:31:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.870 Waiting for target to run... 
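json_config runs its target on a dedicated socket (/var/tmp/spdk_tgt.sock) in --wait-for-rpc mode and then drives it entirely through rpc.py. The startup and first configuration steps recorded below amount to roughly the following; the pipe into load_config mirrors the gen_nvme.sh and load_config calls in the trace, though the exact plumbing inside json_config.sh may differ:

    # Dedicated socket and small memory footprint, paused until RPC arrives
    $SPDK_BIN/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    waitforlisten $! /var/tmp/spdk_tgt.sock
    # Feed a generated NVMe subsystem config and inspect notification types
    scripts/gen_nvme.sh --json-with-subsystems | \
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'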
00:05:41.870 02:31:45 json_config -- json_config/common.sh@25 -- # waitforlisten 661372 /var/tmp/spdk_tgt.sock 00:05:41.870 02:31:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@828 -- # '[' -z 661372 ']' 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:41.870 02:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.131 [2024-05-15 02:31:45.160746] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:42.131 [2024-05-15 02:31:45.160829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661372 ] 00:05:42.131 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.700 [2024-05-15 02:31:45.725020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.700 [2024-05-15 02:31:45.764332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:42.959 02:31:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.959 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:42.959 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:42.959 02:31:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:42.959 02:31:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:43.528 02:31:46 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:43.528 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@45 -- # 
local ret=0 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:43.528 02:31:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:43.528 02:31:46 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:43.787 02:31:46 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:43.787 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:43.787 02:31:46 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:43.787 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:43.787 02:31:46 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:43.787 02:31:46 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.788 02:31:46 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:43.788 02:31:46 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:43.788 02:31:46 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:43.788 02:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.432 02:31:53 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:50.433 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:50.433 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:50.433 Found net devices under 0000:18:00.0: mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:50.433 Found net devices under 0000:18:00.1: mlx_0_1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@58 -- # uname 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
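Once the mlx5 ports are discovered, nvmf/common.sh loads the kernel RDMA stack (the modprobe calls above) and allocate_nic_ips gives each RDMA netdev a test address from the 192.168.100.0/24 range, which is exactly what the ip output below reports for mlx_0_0 and mlx_0_1:

    # allocate_nic_ips, reduced to the two interfaces found on this host
    ip addr add 192.168.100.8/24 dev mlx_0_0
    ip link set mlx_0_0 up
    ip addr add 192.168.100.9/24 dev mlx_0_1
    ip link set mlx_0_1 up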
00:05:50.433 02:31:53 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@74 -- # ip= 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@77 -- # ip link set mlx_0_0 up 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:50.433 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:50.433 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:05:50.433 altname enp24s0f0np0 00:05:50.433 altname ens785f0np0 00:05:50.433 inet 192.168.100.8/24 scope global mlx_0_0 00:05:50.433 valid_lft forever preferred_lft forever 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:50.433 02:31:53 json_config -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@74 -- # ip= 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@77 -- # ip link set mlx_0_1 up 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:50.433 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:50.433 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:05:50.433 altname enp24s0f1np1 00:05:50.433 altname ens785f1np1 00:05:50.433 inet 192.168.100.9/24 scope global mlx_0_1 00:05:50.433 valid_lft forever preferred_lft forever 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@422 -- # return 0 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:50.433 02:31:53 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:50.693 02:31:53 
json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:50.693 192.168.100.9' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:50.693 192.168.100.9' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:50.693 192.168.100.9' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:50.693 02:31:53 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:50.693 02:31:53 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:50.693 02:31:53 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.693 02:31:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.952 MallocForNvmf0 00:05:50.952 02:31:54 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.952 02:31:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.210 MallocForNvmf1 00:05:51.210 02:31:54 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:51.210 02:31:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:51.469 [2024-05-15 02:31:54.527489] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:51.469 [2024-05-15 02:31:54.593039] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b38ed0/0x1c66500) succeed. 00:05:51.469 [2024-05-15 02:31:54.610017] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b3b0c0/0x1be64c0) succeed. 
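allocate_nic_ips, traced above, gives each RDMA interface a static address from the 192.168.100.0/24 test network when it has none, brings the link up, and then collects the resulting address list for the target. A condensed sketch of that loop (interface names and addresses mirror the log; error handling is omitted):

    #!/usr/bin/env bash
    # Assign test IPs to the RDMA interfaces and bring them up.
    count=8   # NVMF_IP_LEAST_ADDR
    for nic in mlx_0_0 mlx_0_1; do
        ip=$(ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1)
        if [[ -z $ip ]]; then
            ip addr add "192.168.100.$count/24" dev "$nic"
            ip link set "$nic" up
        fi
        count=$((count + 1))
    done
    # The first/second target IPs (192.168.100.8 and .9 above) come from the same query.
    for nic in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
    done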
00:05:51.469 02:31:54 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.469 02:31:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.728 02:31:54 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.728 02:31:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.988 02:31:55 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.988 02:31:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.247 02:31:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:52.247 02:31:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:52.506 [2024-05-15 02:31:55.620497] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:52.506 [2024-05-15 02:31:55.620861] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:52.506 02:31:55 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:52.506 02:31:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:52.506 02:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.506 02:31:55 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:52.506 02:31:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:52.506 02:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.506 02:31:55 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:52.506 02:31:55 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.506 02:31:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.765 MallocBdevForConfigChangeCheck 00:05:52.765 02:31:55 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:52.765 02:31:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:52.765 02:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.765 02:31:56 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:52.765 02:31:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.333 02:31:56 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: 
shutting down applications...' 00:05:53.333 INFO: shutting down applications... 00:05:53.333 02:31:56 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:53.333 02:31:56 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:53.333 02:31:56 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:53.333 02:31:56 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:53.593 Calling clear_iscsi_subsystem 00:05:53.593 Calling clear_nvmf_subsystem 00:05:53.593 Calling clear_nbd_subsystem 00:05:53.593 Calling clear_ublk_subsystem 00:05:53.593 Calling clear_vhost_blk_subsystem 00:05:53.593 Calling clear_vhost_scsi_subsystem 00:05:53.593 Calling clear_bdev_subsystem 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:53.593 02:31:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.164 02:31:57 json_config -- json_config/json_config.sh@345 -- # break 00:05:54.164 02:31:57 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:54.164 02:31:57 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:54.164 02:31:57 json_config -- json_config/common.sh@31 -- # local app=target 00:05:54.164 02:31:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:54.164 02:31:57 json_config -- json_config/common.sh@35 -- # [[ -n 661372 ]] 00:05:54.164 02:31:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 661372 00:05:54.164 [2024-05-15 02:31:57.147407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:54.164 02:31:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:54.164 02:31:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.164 02:31:57 json_config -- json_config/common.sh@41 -- # kill -0 661372 00:05:54.164 02:31:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.164 [2024-05-15 02:31:57.274381] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:54.424 02:31:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.424 02:31:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.424 02:31:57 json_config -- json_config/common.sh@41 -- # kill -0 661372 00:05:54.424 02:31:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.424 02:31:57 json_config -- json_config/common.sh@43 -- # break 00:05:54.424 02:31:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.424 02:31:57 json_config -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.424 SPDK target shutdown done 00:05:54.424 02:31:57 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:54.424 INFO: relaunching applications... 00:05:54.424 02:31:57 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.425 02:31:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.425 02:31:57 json_config -- json_config/common.sh@10 -- # shift 00:05:54.425 02:31:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.425 02:31:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.425 02:31:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.425 02:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.425 02:31:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.425 02:31:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=665144 00:05:54.425 02:31:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.425 Waiting for target to run... 00:05:54.425 02:31:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.425 02:31:57 json_config -- json_config/common.sh@25 -- # waitforlisten 665144 /var/tmp/spdk_tgt.sock 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@828 -- # '[' -z 665144 ']' 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:54.425 02:31:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.425 [2024-05-15 02:31:57.709134] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:54.425 [2024-05-15 02:31:57.709206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665144 ] 00:05:54.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.254 [2024-05-15 02:31:58.294009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.254 [2024-05-15 02:31:58.333712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.824 [2024-05-15 02:31:58.854975] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x291dce0/0x2a4ad00) succeed. 00:05:55.824 [2024-05-15 02:31:58.871367] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x291fed0/0x292abc0) succeed. 
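The relaunch replays the same target configuration that was built earlier one RPC at a time. Reduced to a standalone sketch, that sequence looks roughly like this (socket path, NQN, sizes and listener address are the ones visible in the trace):

    #!/usr/bin/env bash
    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # Backing bdevs for the two namespaces.
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # RDMA transport, subsystem, namespaces and listener.
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420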
00:05:55.824 [2024-05-15 02:31:58.930315] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:55.824 [2024-05-15 02:31:58.930658] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:55.824 02:31:58 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:55.824 02:31:58 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:55.824 02:31:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:55.824 00:05:55.824 02:31:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:55.824 02:31:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:55.824 INFO: Checking if target configuration is the same... 00:05:55.824 02:31:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.824 02:31:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:55.824 02:31:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.824 + '[' 2 -ne 2 ']' 00:05:55.824 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:55.824 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:55.824 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:55.824 +++ basename /dev/fd/62 00:05:55.824 ++ mktemp /tmp/62.XXX 00:05:55.824 + tmp_file_1=/tmp/62.1b0 00:05:55.824 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.824 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.824 + tmp_file_2=/tmp/spdk_tgt_config.json.KNZ 00:05:55.824 + ret=0 00:05:55.824 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.082 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.343 + diff -u /tmp/62.1b0 /tmp/spdk_tgt_config.json.KNZ 00:05:56.343 + echo 'INFO: JSON config files are the same' 00:05:56.343 INFO: JSON config files are the same 00:05:56.343 + rm /tmp/62.1b0 /tmp/spdk_tgt_config.json.KNZ 00:05:56.343 + exit 0 00:05:56.343 02:31:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:56.343 02:31:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:56.343 INFO: changing configuration and checking if this can be detected... 
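json_diff.sh, traced above, decides whether two configurations match by sorting both through config_filter.py and diffing the result, so JSON ordering differences do not count as changes. A reduced sketch of the same check (paths as in this workspace; the temp-file names are illustrative):

    #!/usr/bin/env bash
    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=test/json_config/config_filter.py
    # Normalize the saved reference config and the live config, then compare.
    $filter -method sort < spdk_tgt_config.json > /tmp/ref_config.json
    $rpc save_config | $filter -method sort     > /tmp/live_config.json
    if diff -u /tmp/ref_config.json /tmp/live_config.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi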
00:05:56.343 02:31:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.343 02:31:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.603 02:31:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.603 02:31:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:56.603 02:31:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.603 + '[' 2 -ne 2 ']' 00:05:56.603 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:56.603 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:56.603 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:56.603 +++ basename /dev/fd/62 00:05:56.603 ++ mktemp /tmp/62.XXX 00:05:56.603 + tmp_file_1=/tmp/62.Lo6 00:05:56.603 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.603 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.603 + tmp_file_2=/tmp/spdk_tgt_config.json.e0g 00:05:56.603 + ret=0 00:05:56.603 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.862 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:56.862 + diff -u /tmp/62.Lo6 /tmp/spdk_tgt_config.json.e0g 00:05:56.862 + ret=1 00:05:56.862 + echo '=== Start of file: /tmp/62.Lo6 ===' 00:05:56.862 + cat /tmp/62.Lo6 00:05:56.862 + echo '=== End of file: /tmp/62.Lo6 ===' 00:05:56.862 + echo '' 00:05:56.862 + echo '=== Start of file: /tmp/spdk_tgt_config.json.e0g ===' 00:05:56.862 + cat /tmp/spdk_tgt_config.json.e0g 00:05:56.862 + echo '=== End of file: /tmp/spdk_tgt_config.json.e0g ===' 00:05:56.862 + echo '' 00:05:56.862 + rm /tmp/62.Lo6 /tmp/spdk_tgt_config.json.e0g 00:05:56.862 + exit 1 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:56.862 INFO: configuration change detected. 
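The change-detection half of the test, shown above, simply removes the sentinel bdev and expects the same sorted diff to fail. As a short sketch continuing from the previous snippet:

    # Remove the bdev that exists only so the config can be made to differ.
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config | $filter -method sort > /tmp/live_config.json
    if ! diff -u /tmp/ref_config.json /tmp/live_config.json > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi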
00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 665144 ]] 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.862 02:32:00 json_config -- json_config/json_config.sh@323 -- # killprocess 665144 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@947 -- # '[' -z 665144 ']' 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@951 -- # kill -0 665144 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@952 -- # uname 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:56.862 02:32:00 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 665144 00:05:57.121 02:32:00 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:57.121 02:32:00 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:57.121 02:32:00 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 665144' 00:05:57.121 killing process with pid 665144 00:05:57.121 02:32:00 json_config -- common/autotest_common.sh@966 -- # kill 665144 00:05:57.121 [2024-05-15 02:32:00.167504] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:57.121 02:32:00 json_config -- common/autotest_common.sh@971 -- # wait 665144 00:05:57.121 [2024-05-15 02:32:00.295046] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:57.380 02:32:00 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.380 02:32:00 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:57.380 02:32:00 json_config -- common/autotest_common.sh@727 -- # 
xtrace_disable 00:05:57.380 02:32:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.380 02:32:00 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:57.380 02:32:00 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:57.380 INFO: Success 00:05:57.380 02:32:00 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@117 -- # sync 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:57.380 02:32:00 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:57.380 00:05:57.380 real 0m15.665s 00:05:57.380 user 0m19.436s 00:05:57.380 sys 0m7.981s 00:05:57.380 02:32:00 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:57.380 02:32:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.380 ************************************ 00:05:57.380 END TEST json_config 00:05:57.380 ************************************ 00:05:57.639 02:32:00 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.639 02:32:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:57.639 02:32:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:57.639 02:32:00 -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 ************************************ 00:05:57.639 START TEST json_config_extra_key 00:05:57.639 ************************************ 00:05:57.639 02:32:00 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.639 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.639 02:32:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:57.639 02:32:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.639 02:32:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.639 02:32:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.639 02:32:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.639 02:32:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.640 02:32:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.640 02:32:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.640 02:32:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.640 02:32:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.640 02:32:00 
json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.640 INFO: launching applications... 00:05:57.640 02:32:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=665648 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.640 Waiting for target to run... 
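json_config_extra_key launches the target with a dedicated JSON config and then blocks until the RPC socket answers; waitforlisten in autotest_common.sh does this with retries. A simplified stand-in for that launch-and-wait step (the polling loop is an illustrative replacement for the helper, not the helper itself):

    #!/usr/bin/env bash
    sock=/var/tmp/spdk_tgt.sock
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
        --json test/json_config/extra_key.json &
    tgt_pid=$!
    # Poll the RPC socket until the target is ready (simplified waitforlisten).
    until scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
    echo "target is up (pid $tgt_pid)"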
00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 665648 /var/tmp/spdk_tgt.sock 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 665648 ']' 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.640 02:32:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:57.640 02:32:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.640 [2024-05-15 02:32:00.897698] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:57.640 [2024-05-15 02:32:00.897775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665648 ] 00:05:57.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.157 [2024-05-15 02:32:01.241625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.157 [2024-05-15 02:32:01.270006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.726 02:32:01 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:58.726 02:32:01 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.726 00:05:58.726 02:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.726 INFO: shutting down applications... 
00:05:58.726 02:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 665648 ]] 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 665648 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 665648 00:05:58.726 02:32:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 665648 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.295 02:32:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.295 SPDK target shutdown done 00:05:59.295 02:32:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:59.295 Success 00:05:59.295 00:05:59.295 real 0m1.608s 00:05:59.295 user 0m1.462s 00:05:59.295 sys 0m0.472s 00:05:59.295 02:32:02 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:59.295 02:32:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.295 ************************************ 00:05:59.295 END TEST json_config_extra_key 00:05:59.295 ************************************ 00:05:59.295 02:32:02 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.295 02:32:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:59.295 02:32:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:59.295 02:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:59.295 ************************************ 00:05:59.295 START TEST alias_rpc 00:05:59.295 ************************************ 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.295 * Looking for test storage... 
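Shutdown in json_config/common.sh, traced above, is a SIGINT followed by a bounded poll: the target gets up to 30 half-second intervals to exit before the test gives up. A sketch of that pattern, assuming the tgt_pid captured at launch:

    # Ask the target to shut down cleanly and wait for it to go away.
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done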
00:05:59.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:59.295 02:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.295 02:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=665879 00:05:59.295 02:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 665879 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 665879 ']' 00:05:59.295 02:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:59.295 02:32:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.554 [2024-05-15 02:32:02.589394] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:59.554 [2024-05-15 02:32:02.589468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665879 ] 00:05:59.554 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.554 [2024-05-15 02:32:02.697827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.554 [2024-05-15 02:32:02.745171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.813 02:32:02 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:59.813 02:32:02 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:59.813 02:32:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:00.071 02:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 665879 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 665879 ']' 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 665879 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 665879 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 665879' 00:06:00.071 killing process with pid 665879 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@966 -- # kill 665879 00:06:00.071 02:32:03 alias_rpc -- common/autotest_common.sh@971 -- # wait 665879 00:06:00.638 00:06:00.638 real 0m1.219s 00:06:00.638 user 0m1.248s 00:06:00.638 sys 0m0.498s 00:06:00.638 02:32:03 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.638 02:32:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.638 ************************************ 
00:06:00.638 END TEST alias_rpc 00:06:00.638 ************************************ 00:06:00.638 02:32:03 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:00.638 02:32:03 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.638 02:32:03 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:00.638 02:32:03 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:00.638 02:32:03 -- common/autotest_common.sh@10 -- # set +x 00:06:00.638 ************************************ 00:06:00.638 START TEST spdkcli_tcp 00:06:00.638 ************************************ 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.638 * Looking for test storage... 00:06:00.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=666114 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.638 02:32:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 666114 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 666114 ']' 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:00.638 02:32:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.638 [2024-05-15 02:32:03.910708] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:00.638 [2024-05-15 02:32:03.910770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666114 ] 00:06:00.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.897 [2024-05-15 02:32:04.006814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.897 [2024-05-15 02:32:04.055280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.897 [2024-05-15 02:32:04.055284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.155 02:32:04 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:01.155 02:32:04 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:06:01.155 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=666144 00:06:01.155 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:01.155 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:01.415 [ 00:06:01.415 "bdev_malloc_delete", 00:06:01.415 "bdev_malloc_create", 00:06:01.415 "bdev_null_resize", 00:06:01.415 "bdev_null_delete", 00:06:01.415 "bdev_null_create", 00:06:01.415 "bdev_nvme_cuse_unregister", 00:06:01.415 "bdev_nvme_cuse_register", 00:06:01.415 "bdev_opal_new_user", 00:06:01.415 "bdev_opal_set_lock_state", 00:06:01.415 "bdev_opal_delete", 00:06:01.415 "bdev_opal_get_info", 00:06:01.415 "bdev_opal_create", 00:06:01.415 "bdev_nvme_opal_revert", 00:06:01.415 "bdev_nvme_opal_init", 00:06:01.415 "bdev_nvme_send_cmd", 00:06:01.415 "bdev_nvme_get_path_iostat", 00:06:01.415 "bdev_nvme_get_mdns_discovery_info", 00:06:01.415 "bdev_nvme_stop_mdns_discovery", 00:06:01.415 "bdev_nvme_start_mdns_discovery", 00:06:01.415 "bdev_nvme_set_multipath_policy", 00:06:01.415 "bdev_nvme_set_preferred_path", 00:06:01.415 "bdev_nvme_get_io_paths", 00:06:01.415 "bdev_nvme_remove_error_injection", 00:06:01.415 "bdev_nvme_add_error_injection", 00:06:01.415 "bdev_nvme_get_discovery_info", 00:06:01.415 "bdev_nvme_stop_discovery", 00:06:01.415 "bdev_nvme_start_discovery", 00:06:01.415 "bdev_nvme_get_controller_health_info", 00:06:01.415 "bdev_nvme_disable_controller", 00:06:01.415 "bdev_nvme_enable_controller", 00:06:01.415 "bdev_nvme_reset_controller", 00:06:01.415 "bdev_nvme_get_transport_statistics", 00:06:01.415 "bdev_nvme_apply_firmware", 00:06:01.415 "bdev_nvme_detach_controller", 00:06:01.415 "bdev_nvme_get_controllers", 00:06:01.415 "bdev_nvme_attach_controller", 00:06:01.415 "bdev_nvme_set_hotplug", 00:06:01.415 "bdev_nvme_set_options", 00:06:01.415 "bdev_passthru_delete", 00:06:01.415 "bdev_passthru_create", 00:06:01.415 "bdev_lvol_check_shallow_copy", 00:06:01.415 "bdev_lvol_start_shallow_copy", 00:06:01.415 "bdev_lvol_grow_lvstore", 00:06:01.415 "bdev_lvol_get_lvols", 00:06:01.415 "bdev_lvol_get_lvstores", 00:06:01.415 "bdev_lvol_delete", 00:06:01.415 "bdev_lvol_set_read_only", 00:06:01.415 "bdev_lvol_resize", 00:06:01.415 "bdev_lvol_decouple_parent", 00:06:01.415 "bdev_lvol_inflate", 00:06:01.415 "bdev_lvol_rename", 00:06:01.415 "bdev_lvol_clone_bdev", 00:06:01.415 "bdev_lvol_clone", 00:06:01.415 "bdev_lvol_snapshot", 00:06:01.415 "bdev_lvol_create", 00:06:01.415 "bdev_lvol_delete_lvstore", 00:06:01.415 "bdev_lvol_rename_lvstore", 00:06:01.415 "bdev_lvol_create_lvstore", 00:06:01.415 "bdev_raid_set_options", 
00:06:01.415 "bdev_raid_remove_base_bdev", 00:06:01.415 "bdev_raid_add_base_bdev", 00:06:01.415 "bdev_raid_delete", 00:06:01.415 "bdev_raid_create", 00:06:01.415 "bdev_raid_get_bdevs", 00:06:01.415 "bdev_error_inject_error", 00:06:01.415 "bdev_error_delete", 00:06:01.415 "bdev_error_create", 00:06:01.415 "bdev_split_delete", 00:06:01.415 "bdev_split_create", 00:06:01.415 "bdev_delay_delete", 00:06:01.415 "bdev_delay_create", 00:06:01.415 "bdev_delay_update_latency", 00:06:01.415 "bdev_zone_block_delete", 00:06:01.415 "bdev_zone_block_create", 00:06:01.415 "blobfs_create", 00:06:01.415 "blobfs_detect", 00:06:01.415 "blobfs_set_cache_size", 00:06:01.415 "bdev_aio_delete", 00:06:01.415 "bdev_aio_rescan", 00:06:01.415 "bdev_aio_create", 00:06:01.415 "bdev_ftl_set_property", 00:06:01.415 "bdev_ftl_get_properties", 00:06:01.415 "bdev_ftl_get_stats", 00:06:01.415 "bdev_ftl_unmap", 00:06:01.415 "bdev_ftl_unload", 00:06:01.415 "bdev_ftl_delete", 00:06:01.415 "bdev_ftl_load", 00:06:01.415 "bdev_ftl_create", 00:06:01.415 "bdev_virtio_attach_controller", 00:06:01.415 "bdev_virtio_scsi_get_devices", 00:06:01.415 "bdev_virtio_detach_controller", 00:06:01.415 "bdev_virtio_blk_set_hotplug", 00:06:01.415 "bdev_iscsi_delete", 00:06:01.415 "bdev_iscsi_create", 00:06:01.415 "bdev_iscsi_set_options", 00:06:01.415 "accel_error_inject_error", 00:06:01.415 "ioat_scan_accel_module", 00:06:01.415 "dsa_scan_accel_module", 00:06:01.415 "iaa_scan_accel_module", 00:06:01.415 "keyring_file_remove_key", 00:06:01.415 "keyring_file_add_key", 00:06:01.415 "iscsi_get_histogram", 00:06:01.415 "iscsi_enable_histogram", 00:06:01.415 "iscsi_set_options", 00:06:01.415 "iscsi_get_auth_groups", 00:06:01.415 "iscsi_auth_group_remove_secret", 00:06:01.415 "iscsi_auth_group_add_secret", 00:06:01.415 "iscsi_delete_auth_group", 00:06:01.415 "iscsi_create_auth_group", 00:06:01.415 "iscsi_set_discovery_auth", 00:06:01.415 "iscsi_get_options", 00:06:01.415 "iscsi_target_node_request_logout", 00:06:01.415 "iscsi_target_node_set_redirect", 00:06:01.415 "iscsi_target_node_set_auth", 00:06:01.415 "iscsi_target_node_add_lun", 00:06:01.415 "iscsi_get_stats", 00:06:01.415 "iscsi_get_connections", 00:06:01.415 "iscsi_portal_group_set_auth", 00:06:01.415 "iscsi_start_portal_group", 00:06:01.415 "iscsi_delete_portal_group", 00:06:01.415 "iscsi_create_portal_group", 00:06:01.415 "iscsi_get_portal_groups", 00:06:01.415 "iscsi_delete_target_node", 00:06:01.415 "iscsi_target_node_remove_pg_ig_maps", 00:06:01.415 "iscsi_target_node_add_pg_ig_maps", 00:06:01.415 "iscsi_create_target_node", 00:06:01.415 "iscsi_get_target_nodes", 00:06:01.415 "iscsi_delete_initiator_group", 00:06:01.415 "iscsi_initiator_group_remove_initiators", 00:06:01.415 "iscsi_initiator_group_add_initiators", 00:06:01.415 "iscsi_create_initiator_group", 00:06:01.415 "iscsi_get_initiator_groups", 00:06:01.415 "nvmf_set_crdt", 00:06:01.415 "nvmf_set_config", 00:06:01.415 "nvmf_set_max_subsystems", 00:06:01.415 "nvmf_stop_mdns_prr", 00:06:01.415 "nvmf_publish_mdns_prr", 00:06:01.415 "nvmf_subsystem_get_listeners", 00:06:01.415 "nvmf_subsystem_get_qpairs", 00:06:01.415 "nvmf_subsystem_get_controllers", 00:06:01.415 "nvmf_get_stats", 00:06:01.415 "nvmf_get_transports", 00:06:01.415 "nvmf_create_transport", 00:06:01.415 "nvmf_get_targets", 00:06:01.415 "nvmf_delete_target", 00:06:01.415 "nvmf_create_target", 00:06:01.415 "nvmf_subsystem_allow_any_host", 00:06:01.415 "nvmf_subsystem_remove_host", 00:06:01.415 "nvmf_subsystem_add_host", 00:06:01.415 "nvmf_ns_remove_host", 00:06:01.415 
"nvmf_ns_add_host", 00:06:01.415 "nvmf_subsystem_remove_ns", 00:06:01.415 "nvmf_subsystem_add_ns", 00:06:01.415 "nvmf_subsystem_listener_set_ana_state", 00:06:01.415 "nvmf_discovery_get_referrals", 00:06:01.415 "nvmf_discovery_remove_referral", 00:06:01.415 "nvmf_discovery_add_referral", 00:06:01.415 "nvmf_subsystem_remove_listener", 00:06:01.415 "nvmf_subsystem_add_listener", 00:06:01.415 "nvmf_delete_subsystem", 00:06:01.415 "nvmf_create_subsystem", 00:06:01.415 "nvmf_get_subsystems", 00:06:01.415 "env_dpdk_get_mem_stats", 00:06:01.415 "nbd_get_disks", 00:06:01.415 "nbd_stop_disk", 00:06:01.415 "nbd_start_disk", 00:06:01.415 "ublk_recover_disk", 00:06:01.415 "ublk_get_disks", 00:06:01.415 "ublk_stop_disk", 00:06:01.415 "ublk_start_disk", 00:06:01.415 "ublk_destroy_target", 00:06:01.416 "ublk_create_target", 00:06:01.416 "virtio_blk_create_transport", 00:06:01.416 "virtio_blk_get_transports", 00:06:01.416 "vhost_controller_set_coalescing", 00:06:01.416 "vhost_get_controllers", 00:06:01.416 "vhost_delete_controller", 00:06:01.416 "vhost_create_blk_controller", 00:06:01.416 "vhost_scsi_controller_remove_target", 00:06:01.416 "vhost_scsi_controller_add_target", 00:06:01.416 "vhost_start_scsi_controller", 00:06:01.416 "vhost_create_scsi_controller", 00:06:01.416 "thread_set_cpumask", 00:06:01.416 "framework_get_scheduler", 00:06:01.416 "framework_set_scheduler", 00:06:01.416 "framework_get_reactors", 00:06:01.416 "thread_get_io_channels", 00:06:01.416 "thread_get_pollers", 00:06:01.416 "thread_get_stats", 00:06:01.416 "framework_monitor_context_switch", 00:06:01.416 "spdk_kill_instance", 00:06:01.416 "log_enable_timestamps", 00:06:01.416 "log_get_flags", 00:06:01.416 "log_clear_flag", 00:06:01.416 "log_set_flag", 00:06:01.416 "log_get_level", 00:06:01.416 "log_set_level", 00:06:01.416 "log_get_print_level", 00:06:01.416 "log_set_print_level", 00:06:01.416 "framework_enable_cpumask_locks", 00:06:01.416 "framework_disable_cpumask_locks", 00:06:01.416 "framework_wait_init", 00:06:01.416 "framework_start_init", 00:06:01.416 "scsi_get_devices", 00:06:01.416 "bdev_get_histogram", 00:06:01.416 "bdev_enable_histogram", 00:06:01.416 "bdev_set_qos_limit", 00:06:01.416 "bdev_set_qd_sampling_period", 00:06:01.416 "bdev_get_bdevs", 00:06:01.416 "bdev_reset_iostat", 00:06:01.416 "bdev_get_iostat", 00:06:01.416 "bdev_examine", 00:06:01.416 "bdev_wait_for_examine", 00:06:01.416 "bdev_set_options", 00:06:01.416 "notify_get_notifications", 00:06:01.416 "notify_get_types", 00:06:01.416 "accel_get_stats", 00:06:01.416 "accel_set_options", 00:06:01.416 "accel_set_driver", 00:06:01.416 "accel_crypto_key_destroy", 00:06:01.416 "accel_crypto_keys_get", 00:06:01.416 "accel_crypto_key_create", 00:06:01.416 "accel_assign_opc", 00:06:01.416 "accel_get_module_info", 00:06:01.416 "accel_get_opc_assignments", 00:06:01.416 "vmd_rescan", 00:06:01.416 "vmd_remove_device", 00:06:01.416 "vmd_enable", 00:06:01.416 "sock_get_default_impl", 00:06:01.416 "sock_set_default_impl", 00:06:01.416 "sock_impl_set_options", 00:06:01.416 "sock_impl_get_options", 00:06:01.416 "iobuf_get_stats", 00:06:01.416 "iobuf_set_options", 00:06:01.416 "framework_get_pci_devices", 00:06:01.416 "framework_get_config", 00:06:01.416 "framework_get_subsystems", 00:06:01.416 "trace_get_info", 00:06:01.416 "trace_get_tpoint_group_mask", 00:06:01.416 "trace_disable_tpoint_group", 00:06:01.416 "trace_enable_tpoint_group", 00:06:01.416 "trace_clear_tpoint_mask", 00:06:01.416 "trace_set_tpoint_mask", 00:06:01.416 "keyring_get_keys", 00:06:01.416 
"spdk_get_version", 00:06:01.416 "rpc_get_methods" 00:06:01.416 ] 00:06:01.416 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.416 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:01.416 02:32:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 666114 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 666114 ']' 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 666114 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 666114 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 666114' 00:06:01.416 killing process with pid 666114 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 666114 00:06:01.416 02:32:04 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 666114 00:06:01.984 00:06:01.984 real 0m1.235s 00:06:01.984 user 0m2.117s 00:06:01.984 sys 0m0.525s 00:06:01.984 02:32:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.984 02:32:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.984 ************************************ 00:06:01.984 END TEST spdkcli_tcp 00:06:01.984 ************************************ 00:06:01.984 02:32:05 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.984 02:32:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:01.984 02:32:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.984 02:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.984 ************************************ 00:06:01.984 START TEST dpdk_mem_utility 00:06:01.984 ************************************ 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:01.984 * Looking for test storage... 
00:06:01.984 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:01.984 02:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.984 02:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=666365 00:06:01.984 02:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.984 02:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 666365 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 666365 ']' 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:01.984 02:32:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.984 [2024-05-15 02:32:05.221055] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:01.985 [2024-05-15 02:32:05.221120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666365 ] 00:06:01.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.243 [2024-05-15 02:32:05.312627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.243 [2024-05-15 02:32:05.360640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.811 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:02.811 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:06:02.811 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.811 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.811 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:02.811 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.070 { 00:06:03.070 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.070 } 00:06:03.070 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.070 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.070 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:03.070 1 heaps totaling size 814.000000 MiB 00:06:03.070 size: 814.000000 MiB heap id: 0 00:06:03.070 end heaps---------- 00:06:03.070 8 mempools totaling size 598.116089 MiB 00:06:03.070 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.070 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.070 size: 84.521057 MiB name: bdev_io_666365 00:06:03.070 size: 51.011292 MiB name: evtpool_666365 00:06:03.070 size: 50.003479 MiB name: msgpool_666365 
00:06:03.070 size: 21.763794 MiB name: PDU_Pool 00:06:03.070 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.070 size: 0.026123 MiB name: Session_Pool 00:06:03.070 end mempools------- 00:06:03.070 6 memzones totaling size 4.142822 MiB 00:06:03.070 size: 1.000366 MiB name: RG_ring_0_666365 00:06:03.070 size: 1.000366 MiB name: RG_ring_1_666365 00:06:03.070 size: 1.000366 MiB name: RG_ring_4_666365 00:06:03.070 size: 1.000366 MiB name: RG_ring_5_666365 00:06:03.070 size: 0.125366 MiB name: RG_ring_2_666365 00:06:03.070 size: 0.015991 MiB name: RG_ring_3_666365 00:06:03.070 end memzones------- 00:06:03.070 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.070 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:03.070 list of free elements. size: 12.519348 MiB 00:06:03.070 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:03.070 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:03.070 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:03.070 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:03.070 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:03.070 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:03.070 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:03.070 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:03.070 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:03.070 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:03.071 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:03.071 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:03.071 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:03.071 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:03.071 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:03.071 list of standard malloc elements. 
size: 199.218079 MiB 00:06:03.071 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:03.071 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:03.071 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:03.071 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:03.071 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:03.071 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:03.071 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:03.071 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:03.071 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:03.071 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:03.071 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:03.071 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:03.071 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:03.071 list of memzone associated elements. 
size: 602.262573 MiB 00:06:03.071 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:03.071 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.071 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:03.071 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.071 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:03.071 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_666365_0 00:06:03.071 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:03.071 associated memzone info: size: 48.002930 MiB name: MP_evtpool_666365_0 00:06:03.071 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:03.071 associated memzone info: size: 48.002930 MiB name: MP_msgpool_666365_0 00:06:03.071 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:03.071 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.071 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:03.071 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.071 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:03.071 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_666365 00:06:03.071 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:03.071 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_666365 00:06:03.071 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:03.071 associated memzone info: size: 1.007996 MiB name: MP_evtpool_666365 00:06:03.071 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:03.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.071 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:03.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.071 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:03.071 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.071 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:03.071 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.071 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:03.071 associated memzone info: size: 1.000366 MiB name: RG_ring_0_666365 00:06:03.071 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:03.071 associated memzone info: size: 1.000366 MiB name: RG_ring_1_666365 00:06:03.071 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:03.071 associated memzone info: size: 1.000366 MiB name: RG_ring_4_666365 00:06:03.071 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:03.071 associated memzone info: size: 1.000366 MiB name: RG_ring_5_666365 00:06:03.071 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:03.071 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_666365 00:06:03.071 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:03.071 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.071 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:03.071 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.071 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:03.071 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.071 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:03.071 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_666365 00:06:03.071 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:03.071 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.071 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:03.071 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.071 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:03.071 associated memzone info: size: 0.015991 MiB name: RG_ring_3_666365 00:06:03.071 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:03.071 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.071 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:03.071 associated memzone info: size: 0.000183 MiB name: MP_msgpool_666365 00:06:03.071 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:03.071 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_666365 00:06:03.071 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:03.071 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.071 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.071 02:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 666365 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 666365 ']' 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 666365 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 666365 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 666365' 00:06:03.071 killing process with pid 666365 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 666365 00:06:03.071 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 666365 00:06:03.639 00:06:03.639 real 0m1.585s 00:06:03.639 user 0m1.665s 00:06:03.639 sys 0m0.506s 00:06:03.639 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.639 02:32:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.639 ************************************ 00:06:03.639 END TEST dpdk_mem_utility 00:06:03.639 ************************************ 00:06:03.639 02:32:06 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:03.639 02:32:06 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:03.639 02:32:06 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.640 02:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.640 ************************************ 00:06:03.640 START TEST event 00:06:03.640 ************************************ 00:06:03.640 02:32:06 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:03.640 * Looking for test storage... 
00:06:03.640 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:03.640 02:32:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:03.640 02:32:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.640 02:32:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.640 02:32:06 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:03.640 02:32:06 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.640 02:32:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.640 ************************************ 00:06:03.640 START TEST event_perf 00:06:03.640 ************************************ 00:06:03.640 02:32:06 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.640 Running I/O for 1 seconds...[2024-05-15 02:32:06.914127] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:03.640 [2024-05-15 02:32:06.914203] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666611 ] 00:06:03.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.900 [2024-05-15 02:32:07.014346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.900 [2024-05-15 02:32:07.069573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.900 [2024-05-15 02:32:07.069679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.900 [2024-05-15 02:32:07.069783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.900 [2024-05-15 02:32:07.069783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.280 Running I/O for 1 seconds... 00:06:05.280 lcore 0: 162996 00:06:05.280 lcore 1: 162995 00:06:05.280 lcore 2: 162997 00:06:05.280 lcore 3: 162996 00:06:05.280 done. 00:06:05.280 00:06:05.280 real 0m1.257s 00:06:05.280 user 0m4.131s 00:06:05.280 sys 0m0.117s 00:06:05.280 02:32:08 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.280 02:32:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.280 ************************************ 00:06:05.280 END TEST event_perf 00:06:05.280 ************************************ 00:06:05.280 02:32:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.280 02:32:08 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:05.280 02:32:08 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.280 02:32:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.280 ************************************ 00:06:05.280 START TEST event_reactor 00:06:05.280 ************************************ 00:06:05.280 02:32:08 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:05.280 [2024-05-15 02:32:08.269100] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:05.280 [2024-05-15 02:32:08.269186] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666818 ] 00:06:05.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.280 [2024-05-15 02:32:08.377535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.280 [2024-05-15 02:32:08.432336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.218 test_start 00:06:06.218 oneshot 00:06:06.218 tick 100 00:06:06.218 tick 100 00:06:06.218 tick 250 00:06:06.218 tick 100 00:06:06.218 tick 100 00:06:06.218 tick 100 00:06:06.218 tick 250 00:06:06.218 tick 500 00:06:06.218 tick 100 00:06:06.218 tick 100 00:06:06.218 tick 250 00:06:06.218 tick 100 00:06:06.218 tick 100 00:06:06.218 test_end 00:06:06.218 00:06:06.218 real 0m1.260s 00:06:06.218 user 0m1.124s 00:06:06.218 sys 0m0.130s 00:06:06.218 02:32:09 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:06.218 02:32:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:06.218 ************************************ 00:06:06.218 END TEST event_reactor 00:06:06.218 ************************************ 00:06:06.477 02:32:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.477 02:32:09 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:06.477 02:32:09 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:06.477 02:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.477 ************************************ 00:06:06.477 START TEST event_reactor_perf 00:06:06.477 ************************************ 00:06:06.477 02:32:09 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:06.477 [2024-05-15 02:32:09.617262] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:06.477 [2024-05-15 02:32:09.617335] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667018 ] 00:06:06.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.477 [2024-05-15 02:32:09.726411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.737 [2024-05-15 02:32:09.777101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.674 test_start 00:06:07.674 test_end 00:06:07.674 Performance: 323550 events per second 00:06:07.674 00:06:07.674 real 0m1.257s 00:06:07.674 user 0m1.123s 00:06:07.674 sys 0m0.127s 00:06:07.674 02:32:10 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.674 02:32:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.674 ************************************ 00:06:07.674 END TEST event_reactor_perf 00:06:07.674 ************************************ 00:06:07.674 02:32:10 event -- event/event.sh@49 -- # uname -s 00:06:07.674 02:32:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.674 02:32:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.675 02:32:10 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:07.675 02:32:10 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.675 02:32:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.675 ************************************ 00:06:07.675 START TEST event_scheduler 00:06:07.675 ************************************ 00:06:07.675 02:32:10 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:07.936 * Looking for test storage... 00:06:07.936 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:07.936 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.936 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=667251 00:06:07.936 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.936 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.936 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 667251 00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 667251 ']' 00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:07.936 02:32:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.936 [2024-05-15 02:32:11.107747] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:07.936 [2024-05-15 02:32:11.107830] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667251 ] 00:06:07.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.936 [2024-05-15 02:32:11.219677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.227 [2024-05-15 02:32:11.271222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.227 [2024-05-15 02:32:11.271242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.227 [2024-05-15 02:32:11.271284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.227 [2024-05-15 02:32:11.271284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.227 02:32:11 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:06:08.228 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.228 POWER: Env isn't set yet! 00:06:08.228 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.228 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.228 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.228 POWER: Attempting to initialise PSTAT power management... 00:06:08.228 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:08.228 POWER: Initialized successfully for lcore 0 power management 00:06:08.228 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:08.228 POWER: Initialized successfully for lcore 1 power management 00:06:08.228 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:08.228 POWER: Initialized successfully for lcore 2 power management 00:06:08.228 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:08.228 POWER: Initialized successfully for lcore 3 power management 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.228 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.228 [2024-05-15 02:32:11.441499] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
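The scheduler run above is driven entirely over SPDK's JSON-RPC socket; the rpc_cmd helper traced in this log is a thin wrapper around scripts/rpc.py. As a rough illustration of what that wrapper does under the hood, the sketch below opens a Unix socket and issues one of the parameterless methods listed in the rpc_get_methods dump earlier in this section (framework_get_reactors). The default /var/tmp/spdk.sock path, the naive single-response read loop, and the choice of method are assumptions made for illustration, not part of the test itself.

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        # Minimal JSON-RPC 2.0 call against a running spdk_tgt / scheduler app.
        # sock_path is assumed to be the default used throughout this log.
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    # Naive framing: return as soon as the buffer parses as one JSON object.
                    return json.loads(buf.decode())
                except ValueError:
                    continue
        raise RuntimeError("no complete JSON-RPC response received")

    if __name__ == "__main__":
        # framework_get_reactors appears in the rpc_get_methods list above and takes no params.
        print(json.dumps(spdk_rpc("framework_get_reactors"), indent=2))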
00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.228 02:32:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.228 ************************************ 00:06:08.228 START TEST scheduler_create_thread 00:06:08.228 ************************************ 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.228 2 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.228 3 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.228 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 4 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 5 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 6 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 7 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 8 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 9 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.487 10 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.487 02:32:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.747 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.747 02:32:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.747 02:32:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.747 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.747 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:09.684 02:32:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:09.684 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.684 02:32:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.623 02:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:10.623 02:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.623 02:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.623 02:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:10.623 02:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.561 02:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:11.561 00:06:11.561 real 0m3.230s 00:06:11.561 user 0m0.024s 00:06:11.561 sys 0m0.006s 00:06:11.561 02:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.561 02:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.561 ************************************ 00:06:11.561 END TEST scheduler_create_thread 00:06:11.561 ************************************ 00:06:11.561 02:32:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.561 02:32:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 667251 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 667251 ']' 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 667251 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 667251 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 667251' 00:06:11.561 killing process with pid 667251 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 667251 00:06:11.561 02:32:14 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 667251 00:06:11.821 [2024-05-15 02:32:15.098478] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
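The POWER lines that follow record the test framework handing each core's cpufreq governor back from 'performance' to its original 'powersave' setting. One way to observe the same state outside the test is to read the governor files under sysfs; a small sketch, assuming a Linux host that exposes cpufreq (bare-metal nodes like the one in this run do, many VMs do not):

    import glob

    def current_governors():
        # Read /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor for each core.
        out = {}
        for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")):
            core = path.split("/")[5]          # e.g. "cpu3"
            with open(path) as f:
                out[core] = f.read().strip()   # e.g. "powersave" or "performance"
        return out

    if __name__ == "__main__":
        for core, gov in current_governors().items():
            print(f"{core}: {gov}")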
00:06:12.080 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:12.080 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:12.081 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:12.081 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:12.081 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:12.081 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:12.081 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:12.081 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:12.081 00:06:12.081 real 0m4.410s 00:06:12.081 user 0m7.609s 00:06:12.081 sys 0m0.473s 00:06:12.081 02:32:15 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.081 02:32:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.081 ************************************ 00:06:12.081 END TEST event_scheduler 00:06:12.081 ************************************ 00:06:12.340 02:32:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:12.340 02:32:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:12.340 02:32:15 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:12.340 02:32:15 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.340 02:32:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.340 ************************************ 00:06:12.340 START TEST app_repeat 00:06:12.340 ************************************ 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=667950 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 667950' 00:06:12.340 Process app_repeat pid: 667950 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:12.340 spdk_app_start Round 0 00:06:12.340 02:32:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 667950 /var/tmp/spdk-nbd.sock 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 667950 ']' 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@832 
-- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:12.340 02:32:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.340 [2024-05-15 02:32:15.492236] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:12.340 [2024-05-15 02:32:15.492296] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667950 ] 00:06:12.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.340 [2024-05-15 02:32:15.589641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.599 [2024-05-15 02:32:15.642699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.599 [2024-05-15 02:32:15.642704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.599 02:32:15 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:12.599 02:32:15 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:12.599 02:32:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.857 Malloc0 00:06:12.857 02:32:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.116 Malloc1 00:06:13.116 02:32:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.116 02:32:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.374 /dev/nbd0 00:06:13.374 02:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.374 02:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.374 1+0 records in 00:06:13.374 1+0 records out 00:06:13.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251937 s, 16.3 MB/s 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.374 02:32:16 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:13.375 02:32:16 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.375 02:32:16 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:13.375 02:32:16 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:13.375 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.375 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.375 02:32:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.634 /dev/nbd1 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.634 1+0 records in 00:06:13.634 1+0 records out 00:06:13.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257275 s, 15.9 MB/s 
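The waitfornbd helper traced above does two things per device: it polls /proc/partitions (up to 20 tries) until the nbd name shows up, then reads a single 4096-byte block to confirm the export is usable. A rough Python equivalent of that probe, assuming permission to read the /dev/nbdX node and skipping the O_DIRECT flag the shell version passes to dd:

    import time

    def wait_for_nbd(name, retries=20, delay=0.5):
        # Poll /proc/partitions until the device name (e.g. "nbd0") appears.
        for _ in range(retries):
            with open("/proc/partitions") as f:
                if any(line.split()[-1:] == [name] for line in f):
                    break
            time.sleep(delay)
        else:
            raise TimeoutError(f"{name} never appeared in /proc/partitions")
        # Read one 4096-byte block, mirroring the dd bs=4096 count=1 probe in the test.
        with open(f"/dev/{name}", "rb") as dev:
            block = dev.read(4096)
        if len(block) != 4096:
            raise IOError(f"short read from /dev/{name}")

    if __name__ == "__main__":
        wait_for_nbd("nbd0")
        print("nbd0 is up and readable")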
00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:13.634 02:32:16 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.634 02:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.894 { 00:06:13.894 "nbd_device": "/dev/nbd0", 00:06:13.894 "bdev_name": "Malloc0" 00:06:13.894 }, 00:06:13.894 { 00:06:13.894 "nbd_device": "/dev/nbd1", 00:06:13.894 "bdev_name": "Malloc1" 00:06:13.894 } 00:06:13.894 ]' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.894 { 00:06:13.894 "nbd_device": "/dev/nbd0", 00:06:13.894 "bdev_name": "Malloc0" 00:06:13.894 }, 00:06:13.894 { 00:06:13.894 "nbd_device": "/dev/nbd1", 00:06:13.894 "bdev_name": "Malloc1" 00:06:13.894 } 00:06:13.894 ]' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.894 /dev/nbd1' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.894 /dev/nbd1' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.894 256+0 records in 00:06:13.894 256+0 records out 00:06:13.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108018 s, 97.1 MB/s 00:06:13.894 02:32:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.894 02:32:17 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.153 256+0 records in 00:06:14.153 256+0 records out 00:06:14.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295725 s, 35.5 MB/s 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.153 256+0 records in 00:06:14.153 256+0 records out 00:06:14.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291077 s, 36.0 MB/s 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.153 02:32:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.154 02:32:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 
0 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.413 02:32:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.673 02:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.932 02:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.933 02:32:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.933 02:32:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.933 02:32:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.933 02:32:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.933 02:32:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.933 02:32:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.192 02:32:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.452 [2024-05-15 02:32:18.548904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.452 [2024-05-15 02:32:18.597776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.452 [2024-05-15 02:32:18.597782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.452 [2024-05-15 02:32:18.649795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.452 [2024-05-15 02:32:18.649854] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
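The block of xtrace above is nbd_common.sh's write/verify cycle for Round 0: nbd_dd_data_verify seeds 1 MiB of random data, pushes it through each exported /dev/nbdX with O_DIRECT, then byte-compares the devices against the source file. A condensed sketch of that flow, read off the trace rather than copied from the helper, with $SPDK_DIR standing in for the workspace checkout path:

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file="$SPDK_DIR/test/event/nbdrandtest"                      # $SPDK_DIR is a placeholder, not a real variable in the trace
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # seed 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it through each NBD device with O_DIRECT
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # read back and byte-compare the first 1 MiB
    done
    rm "$tmp_file"

The same write/verify pair repeats in Rounds 1 and 2 below against freshly created Malloc0/Malloc1 bdevs.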
00:06:18.744 02:32:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.744 02:32:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.744 spdk_app_start Round 1 00:06:18.744 02:32:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 667950 /var/tmp/spdk-nbd.sock 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 667950 ']' 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:18.744 02:32:21 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:18.744 02:32:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.744 Malloc0 00:06:18.744 02:32:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.744 Malloc1 00:06:18.744 02:32:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.744 02:32:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.003 /dev/nbd0 00:06:19.003 02:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.004 02:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.004 1+0 records in 00:06:19.004 1+0 records out 00:06:19.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162565 s, 25.2 MB/s 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:19.004 02:32:22 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:19.004 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.004 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.004 02:32:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.264 /dev/nbd1 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.264 1+0 records in 00:06:19.264 1+0 records out 00:06:19.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027405 s, 14.9 MB/s 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:19.264 02:32:22 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.264 02:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.523 { 00:06:19.523 "nbd_device": "/dev/nbd0", 00:06:19.523 "bdev_name": "Malloc0" 00:06:19.523 }, 00:06:19.523 { 00:06:19.523 "nbd_device": "/dev/nbd1", 00:06:19.523 "bdev_name": "Malloc1" 00:06:19.523 } 00:06:19.523 ]' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.523 { 00:06:19.523 "nbd_device": "/dev/nbd0", 00:06:19.523 "bdev_name": "Malloc0" 00:06:19.523 }, 00:06:19.523 { 00:06:19.523 "nbd_device": "/dev/nbd1", 00:06:19.523 "bdev_name": "Malloc1" 00:06:19.523 } 00:06:19.523 ]' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.523 /dev/nbd1' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.523 /dev/nbd1' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.523 256+0 records in 00:06:19.523 256+0 records out 00:06:19.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102475 s, 102 MB/s 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.523 256+0 records in 00:06:19.523 256+0 records out 00:06:19.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187459 s, 55.9 MB/s 00:06:19.523 02:32:22 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.523 02:32:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.782 256+0 records in 00:06:19.782 256+0 records out 00:06:19.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192931 s, 54.3 MB/s 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.782 02:32:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.041 02:32:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.306 02:32:23 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.307 02:32:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.568 02:32:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.568 02:32:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.826 02:32:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.086 [2024-05-15 02:32:24.136146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.086 [2024-05-15 02:32:24.182542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.086 [2024-05-15 02:32:24.182547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.086 [2024-05-15 02:32:24.233877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.086 [2024-05-15 02:32:24.233933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
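Round 1 tears down exactly like Round 0: nbd_stop_disks sends nbd_stop_disk over the RPC socket for each device, waitfornbd_exit polls /proc/partitions until the nbdX entry disappears (the trace bounds this at 20 attempts), and nbd_get_count checks that nbd_get_disks now returns an empty list. A hedged sketch of that teardown using only the RPC calls visible above; the retry interval is an assumption and paths are shortened:

    rpc_sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do                    # bounded poll, mirroring the i <= 20 loop in the trace
            grep -q -w "$name" /proc/partitions || break   # stop once the kernel has dropped the device
            sleep 0.1                                      # assumed interval; not taken from the trace
        done
    done
    count=$(scripts/rpc.py -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]                                     # nothing should be left exported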
00:06:24.375 02:32:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.375 02:32:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.375 spdk_app_start Round 2 00:06:24.375 02:32:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 667950 /var/tmp/spdk-nbd.sock 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 667950 ']' 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:24.375 02:32:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.375 02:32:27 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:24.375 02:32:27 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:24.375 02:32:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.375 Malloc0 00:06:24.375 02:32:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.375 Malloc1 00:06:24.375 02:32:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.375 02:32:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.634 /dev/nbd0 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.634 1+0 records in 00:06:24.634 1+0 records out 00:06:24.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183097 s, 22.4 MB/s 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.634 /dev/nbd1 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.634 02:32:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.634 1+0 records in 00:06:24.634 1+0 records out 00:06:24.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258192 s, 15.9 MB/s 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:24.634 02:32:27 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:24.893 02:32:27 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:24.893 02:32:27 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:24.893 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.893 02:32:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.893 02:32:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.893 02:32:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.893 02:32:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.893 02:32:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.893 { 00:06:24.893 "nbd_device": "/dev/nbd0", 00:06:24.893 "bdev_name": "Malloc0" 00:06:24.893 }, 00:06:24.893 { 00:06:24.893 "nbd_device": "/dev/nbd1", 00:06:24.893 "bdev_name": "Malloc1" 00:06:24.893 } 00:06:24.893 ]' 00:06:24.893 02:32:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.893 { 00:06:24.893 "nbd_device": "/dev/nbd0", 00:06:24.893 "bdev_name": "Malloc0" 00:06:24.893 }, 00:06:24.893 { 00:06:24.893 "nbd_device": "/dev/nbd1", 00:06:24.893 "bdev_name": "Malloc1" 00:06:24.893 } 00:06:24.893 ]' 00:06:24.893 02:32:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.153 /dev/nbd1' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.153 /dev/nbd1' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.153 256+0 records in 00:06:25.153 256+0 records out 00:06:25.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109674 s, 95.6 MB/s 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.153 256+0 records in 00:06:25.153 256+0 records out 00:06:25.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204887 s, 51.2 MB/s 00:06:25.153 02:32:28 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.153 256+0 records in 00:06:25.153 256+0 records out 00:06:25.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200327 s, 52.3 MB/s 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.153 02:32:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.412 02:32:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.670 02:32:28 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.670 02:32:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.928 02:32:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.928 02:32:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.187 02:32:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.445 [2024-05-15 02:32:29.556681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.445 [2024-05-15 02:32:29.605790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.445 [2024-05-15 02:32:29.605794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.445 [2024-05-15 02:32:29.658904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.445 [2024-05-15 02:32:29.658971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.824 02:32:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 667950 /var/tmp/spdk-nbd.sock 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 667950 ']' 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:29.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:29.824 02:32:32 event.app_repeat -- event/event.sh@39 -- # killprocess 667950 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 667950 ']' 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 667950 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 667950 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 667950' 00:06:29.824 killing process with pid 667950 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@966 -- # kill 667950 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@971 -- # wait 667950 00:06:29.824 spdk_app_start is called in Round 0. 00:06:29.824 Shutdown signal received, stop current app iteration 00:06:29.824 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:06:29.824 spdk_app_start is called in Round 1. 00:06:29.824 Shutdown signal received, stop current app iteration 00:06:29.824 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:06:29.824 spdk_app_start is called in Round 2. 00:06:29.824 Shutdown signal received, stop current app iteration 00:06:29.824 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:06:29.824 spdk_app_start is called in Round 3. 
00:06:29.824 Shutdown signal received, stop current app iteration 00:06:29.824 02:32:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.824 02:32:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.824 00:06:29.824 real 0m17.304s 00:06:29.824 user 0m37.475s 00:06:29.824 sys 0m3.614s 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:29.824 02:32:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.824 ************************************ 00:06:29.824 END TEST app_repeat 00:06:29.824 ************************************ 00:06:29.824 02:32:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.824 02:32:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.824 02:32:32 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:29.824 02:32:32 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:29.824 02:32:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.824 ************************************ 00:06:29.824 START TEST cpu_locks 00:06:29.824 ************************************ 00:06:29.824 02:32:32 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.824 * Looking for test storage... 00:06:29.824 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:29.824 02:32:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.824 02:32:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.824 02:32:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.824 02:32:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.824 02:32:32 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:29.824 02:32:32 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:29.824 02:32:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.824 ************************************ 00:06:29.824 START TEST default_locks 00:06:29.824 ************************************ 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=670437 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 670437 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 670437 ']' 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
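app_repeat above and every cpu_locks case below reap their target through the same killprocess helper, whose xtrace keeps recurring in this log: verify the pid is still alive with kill -0, look up its comm with ps, print the 'killing process with pid ...' marker, then kill and wait. A sketch of that pattern as far as it can be reconstructed from the trace; the comm == sudo branch is never taken in this run, so it is only stubbed out here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # sanity check: the pid must still be alive
        if [ "$(uname)" = Linux ]; then
            # comm is reactor_0 for every target in this log; the real helper special-cases
            # comm == sudo, a branch not exercised here, so this sketch simply bails out
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

In the trace, killprocess 667950 reaps the app_repeat target, and pids 670437, 670647 and 670867 are handled the same way in the lock tests that follow.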
00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:29.824 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.824 [2024-05-15 02:32:33.054154] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:29.824 [2024-05-15 02:32:33.054212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670437 ] 00:06:30.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.097 [2024-05-15 02:32:33.149176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.097 [2024-05-15 02:32:33.199082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.356 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:30.356 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:06:30.356 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 670437 00:06:30.356 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 670437 00:06:30.356 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.923 lslocks: write error 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 670437 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 670437 ']' 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 670437 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 670437 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 670437' 00:06:30.923 killing process with pid 670437 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 670437 00:06:30.923 02:32:33 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 670437 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 670437 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 670437 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 670437 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 670437 ']' 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.182 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (670437) - No such process 00:06:31.182 ERROR: process (pid: 670437) is no longer running 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.182 00:06:31.182 real 0m1.330s 00:06:31.182 user 0m1.276s 00:06:31.182 sys 0m0.682s 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:31.182 02:32:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.182 ************************************ 00:06:31.182 END TEST default_locks 00:06:31.182 ************************************ 00:06:31.182 02:32:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.182 02:32:34 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:31.182 02:32:34 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:31.182 02:32:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.182 ************************************ 00:06:31.182 START TEST default_locks_via_rpc 00:06:31.182 ************************************ 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=670647 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 670647 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.182 02:32:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 670647 ']' 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:31.182 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 [2024-05-15 02:32:34.482719] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:31.441 [2024-05-15 02:32:34.482795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670647 ] 00:06:31.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.441 [2024-05-15 02:32:34.594286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.441 [2024-05-15 02:32:34.647237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 670647 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 670647 00:06:31.700 02:32:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 670647 00:06:32.267 02:32:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 670647 ']' 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 670647 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 670647 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 670647' 00:06:32.267 killing process with pid 670647 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 670647 00:06:32.267 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 670647 00:06:32.835 00:06:32.835 real 0m1.438s 00:06:32.835 user 0m1.381s 00:06:32.835 sys 0m0.688s 00:06:32.835 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:32.835 02:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.835 ************************************ 00:06:32.835 END TEST default_locks_via_rpc 00:06:32.835 ************************************ 00:06:32.835 02:32:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.835 02:32:35 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:32.835 02:32:35 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:32.835 02:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.835 ************************************ 00:06:32.835 START TEST non_locking_app_on_locked_coremask 00:06:32.835 ************************************ 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=670867 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 670867 /var/tmp/spdk.sock 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 670867 ']' 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
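The two cases that finish above, default_locks and default_locks_via_rpc, hinge on the same check: a target started with -m 0x1 must hold a CPU-core lock, which locks_exist verifies by piping lslocks -p <pid> into grep. The 'lslocks: write error' lines are most likely lslocks hitting a closed pipe once grep -q exits on its first match, not a test failure. A hedged sketch of the check and of the runtime toggle exercised by the via_rpc case (rpc_cmd in the trace wraps the same rpc.py call; paths are shortened, and the lock-file location is not spelled out in this log):

    locks_exist() {
        local pid=$1
        # grep -q exits on the first hit, which is what makes lslocks print its harmless "write error"
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # lock released at runtime
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # and re-acquired
    locks_exist "$spdk_tgt_pid"                                            # $spdk_tgt_pid is 670647 in the trace above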
00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:32.835 02:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.835 [2024-05-15 02:32:36.003392] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:32.835 [2024-05-15 02:32:36.003456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670867 ] 00:06:32.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.835 [2024-05-15 02:32:36.113396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.094 [2024-05-15 02:32:36.164766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=671034 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 671034 /var/tmp/spdk2.sock 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 671034 ']' 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:33.353 02:32:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.353 [2024-05-15 02:32:36.440136] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:33.353 [2024-05-15 02:32:36.440210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671034 ] 00:06:33.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.353 [2024-05-15 02:32:36.582709] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.353 [2024-05-15 02:32:36.582746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.612 [2024-05-15 02:32:36.679691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.179 02:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:34.179 02:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:34.179 02:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 670867 00:06:34.179 02:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 670867 00:06:34.179 02:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.557 lslocks: write error 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 670867 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 670867 ']' 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 670867 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 670867 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 670867' 00:06:35.557 killing process with pid 670867 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 670867 00:06:35.557 02:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 670867 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 671034 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 671034 ']' 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 671034 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 671034 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 671034' 00:06:36.125 killing 
process with pid 671034 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 671034 00:06:36.125 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 671034 00:06:36.693 00:06:36.693 real 0m3.809s 00:06:36.693 user 0m4.018s 00:06:36.693 sys 0m1.495s 00:06:36.693 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:36.693 02:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.693 ************************************ 00:06:36.693 END TEST non_locking_app_on_locked_coremask 00:06:36.693 ************************************ 00:06:36.693 02:32:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:36.693 02:32:39 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:36.693 02:32:39 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.693 02:32:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.693 ************************************ 00:06:36.693 START TEST locking_app_on_unlocked_coremask 00:06:36.693 ************************************ 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=671442 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 671442 /var/tmp/spdk.sock 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 671442 ']' 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:36.693 02:32:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.693 [2024-05-15 02:32:39.908563] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:36.693 [2024-05-15 02:32:39.908636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671442 ] 00:06:36.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.950 [2024-05-15 02:32:40.019265] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
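The non_locking_app_on_locked_coremask run above shows the permissive side of the mechanism: a target that holds the core-0 lock coexists with a second target started on the same mask but with --disable-cpumask-locks, which prints "CPU core locks deactivated" and never tries to claim the lock file. A sketch of the two launches, with the workspace path shortened and backgrounding added for illustration:

  # First target claims core 0; second opts out of core locking entirely.
  ./build/bin/spdk_tgt -m 0x1 &
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # Both come up: the second never touches /var/tmp/spdk_cpu_lock_000.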
00:06:36.950 [2024-05-15 02:32:40.019304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.950 [2024-05-15 02:32:40.069359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=671525 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 671525 /var/tmp/spdk2.sock 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 671525 ']' 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:37.209 02:32:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.209 [2024-05-15 02:32:40.322364] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:37.209 [2024-05-15 02:32:40.322428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671525 ] 00:06:37.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.209 [2024-05-15 02:32:40.452316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.467 [2024-05-15 02:32:40.547229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.034 02:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:38.034 02:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:38.034 02:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 671525 00:06:38.034 02:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 671525 00:06:38.034 02:32:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.411 lslocks: write error 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 671442 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 671442 ']' 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 671442 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 671442 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 671442' 00:06:39.411 killing process with pid 671442 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 671442 00:06:39.411 02:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 671442 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 671525 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 671525 ']' 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 671525 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 671525 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:39.979 
02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 671525' 00:06:39.979 killing process with pid 671525 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 671525 00:06:39.979 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 671525 00:06:40.546 00:06:40.546 real 0m3.715s 00:06:40.546 user 0m3.870s 00:06:40.546 sys 0m1.412s 00:06:40.546 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.547 ************************************ 00:06:40.547 END TEST locking_app_on_unlocked_coremask 00:06:40.547 ************************************ 00:06:40.547 02:32:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.547 02:32:43 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:40.547 02:32:43 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:40.547 02:32:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.547 ************************************ 00:06:40.547 START TEST locking_app_on_locked_coremask 00:06:40.547 ************************************ 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=672017 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 672017 /var/tmp/spdk.sock 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 672017 ']' 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:40.547 02:32:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.547 [2024-05-15 02:32:43.716992] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:40.547 [2024-05-15 02:32:43.717068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672017 ] 00:06:40.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.547 [2024-05-15 02:32:43.825740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.805 [2024-05-15 02:32:43.877830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=672030 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 672030 /var/tmp/spdk2.sock 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 672030 /var/tmp/spdk2.sock 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 672030 /var/tmp/spdk2.sock 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 672030 ']' 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:40.805 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.063 [2024-05-15 02:32:44.130223] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
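locking_app_on_locked_coremask launches a second target on the already-claimed mask and wraps the wait in the NOT helper, so the step only passes if that target fails to come up. A simplified sketch of that inversion, assuming the semantics visible in the trace (the real autotest_common.sh helper also validates its argument and inspects the exit status range):

  # Succeed only when the wrapped command fails.
  NOT() { ! "$@"; }

  NOT waitforlisten 672030 /var/tmp/spdk2.sock   # passes because pid 672030 exits early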
00:06:41.063 [2024-05-15 02:32:44.130283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672030 ] 00:06:41.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.063 [2024-05-15 02:32:44.260255] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 672017 has claimed it. 00:06:41.063 [2024-05-15 02:32:44.260308] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.631 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (672030) - No such process 00:06:41.631 ERROR: process (pid: 672030) is no longer running 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 672017 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 672017 00:06:41.631 02:32:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.199 lslocks: write error 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 672017 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 672017 ']' 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 672017 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 672017 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 672017' 00:06:42.199 killing process with pid 672017 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 672017 00:06:42.199 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 672017 00:06:42.458 00:06:42.458 real 0m2.072s 00:06:42.458 user 0m2.189s 00:06:42.458 sys 0m0.807s 00:06:42.458 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 
-- # xtrace_disable 00:06:42.458 02:32:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 ************************************ 00:06:42.458 END TEST locking_app_on_locked_coremask 00:06:42.458 ************************************ 00:06:42.718 02:32:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.718 02:32:45 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:42.718 02:32:45 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:42.718 02:32:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.718 ************************************ 00:06:42.718 START TEST locking_overlapped_coremask 00:06:42.718 ************************************ 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=672348 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 672348 /var/tmp/spdk.sock 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 672348 ']' 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:42.718 02:32:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.718 [2024-05-15 02:32:45.882567] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
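The failure path above comes from claim_cpu_cores: the second target cannot lock core 0 because pid 672017 already holds /var/tmp/spdk_cpu_lock_000, so spdk_app_start aborts. A hedged reproduction of that conflict, with paths shortened and the pid in the message left generic:

  # Same mask, locks enabled on both sides -> the second target must exit.
  ./build/bin/spdk_tgt -m 0x1 &
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # Expected errors, as seen in the trace:
  #   app.c: claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
  #   app.c: spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.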
00:06:42.718 [2024-05-15 02:32:45.882631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672348 ] 00:06:42.718 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.718 [2024-05-15 02:32:45.992142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.977 [2024-05-15 02:32:46.045931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.977 [2024-05-15 02:32:46.046017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.977 [2024-05-15 02:32:46.046021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=672418 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 672418 /var/tmp/spdk2.sock 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 672418 /var/tmp/spdk2.sock 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 672418 /var/tmp/spdk2.sock 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 672418 ']' 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:43.237 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.237 [2024-05-15 02:32:46.333399] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:43.237 [2024-05-15 02:32:46.333478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672418 ] 00:06:43.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.237 [2024-05-15 02:32:46.451745] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 672348 has claimed it. 00:06:43.237 [2024-05-15 02:32:46.451784] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (672418) - No such process 00:06:43.806 ERROR: process (pid: 672418) is no longer running 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 672348 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 672348 ']' 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 672348 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:43.806 02:32:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 672348 00:06:43.806 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:43.806 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:43.806 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 672348' 00:06:43.806 killing process with pid 672348 00:06:43.806 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 672348 00:06:43.806 
02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 672348 00:06:44.374 00:06:44.374 real 0m1.555s 00:06:44.374 user 0m4.066s 00:06:44.374 sys 0m0.535s 00:06:44.374 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.375 ************************************ 00:06:44.375 END TEST locking_overlapped_coremask 00:06:44.375 ************************************ 00:06:44.375 02:32:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.375 02:32:47 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:44.375 02:32:47 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:44.375 02:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.375 ************************************ 00:06:44.375 START TEST locking_overlapped_coremask_via_rpc 00:06:44.375 ************************************ 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=672628 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 672628 /var/tmp/spdk.sock 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 672628 ']' 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:44.375 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.375 [2024-05-15 02:32:47.527633] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:44.375 [2024-05-15 02:32:47.527705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672628 ] 00:06:44.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.375 [2024-05-15 02:32:47.636699] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
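In locking_overlapped_coremask above, the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4); the contested core is the bit the two masks share, which is why claim_cpu_cores reports core 2. A quick check of that overlap:

  # 0x7 = cores 0-2, 0x1c = cores 2-4; the shared bit is the contested core.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> core 2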
00:06:44.375 [2024-05-15 02:32:47.636738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.633 [2024-05-15 02:32:47.688975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.633 [2024-05-15 02:32:47.689059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.633 [2024-05-15 02:32:47.689064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=672646 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 672646 /var/tmp/spdk2.sock 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 672646 ']' 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:44.633 02:32:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.892 [2024-05-15 02:32:47.956579] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:44.892 [2024-05-15 02:32:47.956638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672646 ] 00:06:44.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.892 [2024-05-15 02:32:48.059046] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.892 [2024-05-15 02:32:48.059079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.892 [2024-05-15 02:32:48.142026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.893 [2024-05-15 02:32:48.145941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.893 [2024-05-15 02:32:48.145941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.829 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.829 [2024-05-15 02:32:48.855966] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 672628 has claimed it. 
00:06:45.829 request: 00:06:45.829 { 00:06:45.830 "method": "framework_enable_cpumask_locks", 00:06:45.830 "req_id": 1 00:06:45.830 } 00:06:45.830 Got JSON-RPC error response 00:06:45.830 response: 00:06:45.830 { 00:06:45.830 "code": -32603, 00:06:45.830 "message": "Failed to claim CPU core: 2" 00:06:45.830 } 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 672628 /var/tmp/spdk.sock 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 672628 ']' 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:45.830 02:32:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 672646 /var/tmp/spdk2.sock 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 672646 ']' 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
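locking_overlapped_coremask_via_rpc starts both targets with --disable-cpumask-locks and claims the cores afterwards over JSON-RPC; the second claim fails with the error object reproduced above because core 2 is already locked by pid 672628. A sketch of driving the same calls by hand, assuming the stock scripts/rpc.py client and the socket paths used in this run:

  # First target (default /var/tmp/spdk.sock) claims its cores after startup.
  ./scripts/rpc.py framework_enable_cpumask_locks
  # Second target on the overlapping 0x1c mask gets the error shown above.
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  #   => JSON-RPC error -32603: "Failed to claim CPU core: 2"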
00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.089 00:06:46.089 real 0m1.850s 00:06:46.089 user 0m0.896s 00:06:46.089 sys 0m0.196s 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:46.089 02:32:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.089 ************************************ 00:06:46.089 END TEST locking_overlapped_coremask_via_rpc 00:06:46.089 ************************************ 00:06:46.089 02:32:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.089 02:32:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 672628 ]] 00:06:46.089 02:32:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 672628 00:06:46.089 02:32:49 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 672628 ']' 00:06:46.089 02:32:49 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 672628 00:06:46.089 02:32:49 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:46.089 02:32:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:46.089 02:32:49 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 672628 00:06:46.348 02:32:49 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:46.348 02:32:49 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:46.348 02:32:49 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 672628' 00:06:46.348 killing process with pid 672628 00:06:46.348 02:32:49 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 672628 00:06:46.348 02:32:49 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 672628 00:06:46.607 02:32:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 672646 ]] 00:06:46.607 02:32:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 672646 00:06:46.607 02:32:49 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 672646 ']' 00:06:46.607 02:32:49 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 672646 00:06:46.607 02:32:49 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:46.607 02:32:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 
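check_remaining_locks, called at the end of both overlapped tests, compares the lock files actually present under /var/tmp against the set expected for the 3-core mask 0x7. A sketch using the same glob and brace expansion that appears in the trace:

  # Expected: exactly /var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match the claimed cores"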
00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 672646 00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 672646' 00:06:46.608 killing process with pid 672646 00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 672646 00:06:46.608 02:32:49 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 672646 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 672628 ]] 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 672628 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 672628 ']' 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 672628 00:06:47.176 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (672628) - No such process 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 672628 is not found' 00:06:47.176 Process with pid 672628 is not found 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 672646 ]] 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 672646 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 672646 ']' 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 672646 00:06:47.176 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (672646) - No such process 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 672646 is not found' 00:06:47.176 Process with pid 672646 is not found 00:06:47.176 02:32:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.176 00:06:47.176 real 0m17.326s 00:06:47.176 user 0m28.091s 00:06:47.176 sys 0m6.985s 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:47.176 02:32:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.176 ************************************ 00:06:47.176 END TEST cpu_locks 00:06:47.176 ************************************ 00:06:47.176 00:06:47.176 real 0m43.487s 00:06:47.176 user 1m19.796s 00:06:47.176 sys 0m11.897s 00:06:47.176 02:32:50 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:47.176 02:32:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.176 ************************************ 00:06:47.176 END TEST event 00:06:47.176 ************************************ 00:06:47.176 02:32:50 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:47.176 02:32:50 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:47.176 02:32:50 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:47.176 02:32:50 -- common/autotest_common.sh@10 -- # set +x 00:06:47.176 ************************************ 00:06:47.176 START TEST thread 00:06:47.176 ************************************ 00:06:47.176 02:32:50 thread -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:47.176 * Looking for test storage... 00:06:47.176 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:47.176 02:32:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.176 02:32:50 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:47.176 02:32:50 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:47.176 02:32:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.176 ************************************ 00:06:47.176 START TEST thread_poller_perf 00:06:47.176 ************************************ 00:06:47.176 02:32:50 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.176 [2024-05-15 02:32:50.465135] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:47.176 [2024-05-15 02:32:50.465213] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673100 ] 00:06:47.434 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.434 [2024-05-15 02:32:50.571863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.434 [2024-05-15 02:32:50.620730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.434 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.845 ====================================== 00:06:48.845 busy:2312650194 (cyc) 00:06:48.845 total_run_count: 267000 00:06:48.845 tsc_hz: 2300000000 (cyc) 00:06:48.845 ====================================== 00:06:48.845 poller_cost: 8661 (cyc), 3765 (nsec) 00:06:48.845 00:06:48.845 real 0m1.252s 00:06:48.845 user 0m1.134s 00:06:48.845 sys 0m0.111s 00:06:48.845 02:32:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:48.845 02:32:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.845 ************************************ 00:06:48.845 END TEST thread_poller_perf 00:06:48.845 ************************************ 00:06:48.845 02:32:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.845 02:32:51 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:48.845 02:32:51 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:48.845 02:32:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.845 ************************************ 00:06:48.845 START TEST thread_poller_perf 00:06:48.845 ************************************ 00:06:48.845 02:32:51 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.845 [2024-05-15 02:32:51.803909] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
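The poller_perf summary above reports 2312650194 busy cycles across 267000 poller runs at a 2300000000 Hz TSC; poller_cost is simply busy cycles divided by run count, converted to nanoseconds with the TSC rate, which reproduces the 8661-cycle / 3765-nsec figures (the same arithmetic gives 655 cycles / 284 nsec for the zero-period run that follows):

  # Back-of-envelope check of the summary above (integer math, rounds down).
  busy=2312650194; runs=267000; tsc_hz=2300000000
  echo "$(( busy / runs )) cyc per poll"                          # ~8661
  echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec per poll"   # ~3765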
00:06:48.845 [2024-05-15 02:32:51.803974] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673303 ] 00:06:48.846 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.846 [2024-05-15 02:32:51.911313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.846 [2024-05-15 02:32:51.960252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.846 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.791 ====================================== 00:06:49.791 busy:2302255818 (cyc) 00:06:49.791 total_run_count: 3513000 00:06:49.791 tsc_hz: 2300000000 (cyc) 00:06:49.791 ====================================== 00:06:49.791 poller_cost: 655 (cyc), 284 (nsec) 00:06:49.791 00:06:49.791 real 0m1.248s 00:06:49.791 user 0m1.129s 00:06:49.791 sys 0m0.112s 00:06:49.791 02:32:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:49.791 02:32:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.791 ************************************ 00:06:49.791 END TEST thread_poller_perf 00:06:49.791 ************************************ 00:06:49.791 02:32:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.791 00:06:49.791 real 0m2.770s 00:06:49.791 user 0m2.365s 00:06:49.791 sys 0m0.402s 00:06:49.791 02:32:53 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:49.791 02:32:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.791 ************************************ 00:06:49.791 END TEST thread 00:06:49.791 ************************************ 00:06:50.049 02:32:53 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:50.049 02:32:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:50.049 02:32:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:50.049 02:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:50.049 ************************************ 00:06:50.049 START TEST accel 00:06:50.049 ************************************ 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:50.049 * Looking for test storage... 00:06:50.049 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:50.049 02:32:53 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:50.049 02:32:53 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:50.049 02:32:53 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.049 02:32:53 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=673550 00:06:50.049 02:32:53 accel -- accel/accel.sh@63 -- # waitforlisten 673550 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@828 -- # '[' -z 673550 ']' 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
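The waitforlisten step above simply blocks until the freshly started spdk_tgt creates and listens on its RPC socket. A minimal, hypothetical stand-in for that wait (the real helper in autotest_common.sh also applies a timeout and probes the RPC server) could be:

rpc_sock=/var/tmp/spdk.sock
while ! [ -S "$rpc_sock" ]; do   # poll until the UNIX-domain RPC socket exists
    sleep 0.1
done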
00:06:50.049 02:32:53 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:50.049 02:32:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.049 02:32:53 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:50.049 02:32:53 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:50.049 02:32:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.049 02:32:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.049 02:32:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.049 02:32:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.049 02:32:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.049 02:32:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:50.049 02:32:53 accel -- accel/accel.sh@41 -- # jq -r . 00:06:50.049 [2024-05-15 02:32:53.329247] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:50.049 [2024-05-15 02:32:53.329321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673550 ] 00:06:50.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.309 [2024-05-15 02:32:53.437834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.309 [2024-05-15 02:32:53.485688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.569 02:32:53 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:50.569 02:32:53 accel -- common/autotest_common.sh@861 -- # return 0 00:06:50.569 02:32:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:50.569 02:32:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:50.569 02:32:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:50.569 02:32:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:50.569 02:32:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:50.569 02:32:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:50.569 02:32:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 
02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # IFS== 00:06:50.570 02:32:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:50.570 02:32:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:50.570 02:32:53 accel -- accel/accel.sh@75 -- # killprocess 673550 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@947 -- # '[' -z 673550 ']' 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@951 -- # kill -0 673550 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@952 -- # uname 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 673550 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 673550' 00:06:50.570 killing process with pid 673550 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@966 -- # kill 673550 00:06:50.570 02:32:53 accel -- common/autotest_common.sh@971 -- # wait 673550 00:06:51.137 02:32:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:51.137 02:32:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.137 02:32:54 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:51.137 02:32:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
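The killprocess sequence traced above reduces to a small pattern: confirm the PID is still alive with kill -0, read its command name with ps (so a sudo-wrapped process can be treated differently), then kill it and wait for it to be reaped. A simplified sketch, leaving out the sudo and not-found handling the real autotest_common.sh function performs:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                                # reap it so the next test does not race the shutdown
}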
00:06:51.137 02:32:54 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:51.137 02:32:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:51.137 02:32:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:51.137 02:32:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.137 ************************************ 00:06:51.137 START TEST accel_missing_filename 00:06:51.137 ************************************ 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:51.137 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:51.137 02:32:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:51.137 [2024-05-15 02:32:54.365955] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:51.137 [2024-05-15 02:32:54.366014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673767 ] 00:06:51.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.423 [2024-05-15 02:32:54.471176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.423 [2024-05-15 02:32:54.521350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.423 [2024-05-15 02:32:54.572972] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.423 [2024-05-15 02:32:54.645150] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:51.682 A filename is required. 
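This case drives accel_perf through the NOT wrapper: running the compress workload without -l is expected to abort with the filename error above, and the wrapper inverts the exit status before the es=... bookkeeping that follows. A minimal sketch of that inversion idiom, ignoring the exit-status remapping the real helper performs:

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}
NOT accel_perf -t 1 -w compress   # passes, because compress without -l aborts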
00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:51.682 00:06:51.682 real 0m0.387s 00:06:51.682 user 0m0.258s 00:06:51.682 sys 0m0.171s 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:51.682 02:32:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:51.682 ************************************ 00:06:51.682 END TEST accel_missing_filename 00:06:51.682 ************************************ 00:06:51.682 02:32:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:51.682 02:32:54 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:51.682 02:32:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:51.682 02:32:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.682 ************************************ 00:06:51.682 START TEST accel_compress_verify 00:06:51.682 ************************************ 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:51.682 02:32:54 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.682 02:32:54 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:51.682 02:32:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:51.682 [2024-05-15 02:32:54.831180] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:51.682 [2024-05-15 02:32:54.831237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673788 ] 00:06:51.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.682 [2024-05-15 02:32:54.936841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.941 [2024-05-15 02:32:54.987681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.941 [2024-05-15 02:32:55.039797] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.941 [2024-05-15 02:32:55.111474] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:51.941 00:06:51.941 Compression does not support the verify option, aborting. 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:51.941 00:06:51.941 real 0m0.386s 00:06:51.941 user 0m0.243s 00:06:51.941 sys 0m0.183s 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:51.941 02:32:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.941 ************************************ 00:06:51.941 END TEST accel_compress_verify 00:06:51.941 ************************************ 00:06:51.941 02:32:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:51.941 02:32:55 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:51.941 02:32:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:51.941 02:32:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.200 ************************************ 00:06:52.200 START TEST accel_wrong_workload 00:06:52.200 ************************************ 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:52.200 02:32:55 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:52.200 02:32:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:52.200 Unsupported workload type: foobar 00:06:52.200 [2024-05-15 02:32:55.301640] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.200 accel_perf options: 00:06:52.200 [-h help message] 00:06:52.200 [-q queue depth per core] 00:06:52.200 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.200 [-T number of threads per core 00:06:52.200 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.200 [-t time in seconds] 00:06:52.200 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.200 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.200 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.200 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.200 [-S for crc32c workload, use this seed value (default 0) 00:06:52.200 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.200 [-f for fill workload, use this BYTE value (default 255) 00:06:52.200 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.200 [-y verify result if this switch is on] 00:06:52.200 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.200 Can be used to spread operations across a wider range of memory. 
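The option summary above maps directly onto the invocations used in this suite: a workload is selected with -w, -S seeds the crc32c case, -y verifies results, and -l names the uncompressed input for compress/decompress. Both of these command lines appear verbatim elsewhere in this run:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y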
00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:52.200 00:06:52.200 real 0m0.038s 00:06:52.200 user 0m0.023s 00:06:52.200 sys 0m0.015s 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:52.200 02:32:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:52.200 ************************************ 00:06:52.200 END TEST accel_wrong_workload 00:06:52.200 ************************************ 00:06:52.200 Error: writing output failed: Broken pipe 00:06:52.200 02:32:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.200 02:32:55 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:52.200 02:32:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:52.200 02:32:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.200 ************************************ 00:06:52.200 START TEST accel_negative_buffers 00:06:52.200 ************************************ 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.200 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:52.200 02:32:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:52.200 -x option must be non-negative. 
00:06:52.200 [2024-05-15 02:32:55.429765] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:52.200 accel_perf options: 00:06:52.200 [-h help message] 00:06:52.200 [-q queue depth per core] 00:06:52.200 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.200 [-T number of threads per core 00:06:52.200 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.200 [-t time in seconds] 00:06:52.200 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.200 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.200 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.200 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.201 [-S for crc32c workload, use this seed value (default 0) 00:06:52.201 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.201 [-f for fill workload, use this BYTE value (default 255) 00:06:52.201 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.201 [-y verify result if this switch is on] 00:06:52.201 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.201 Can be used to spread operations across a wider range of memory. 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:52.201 00:06:52.201 real 0m0.037s 00:06:52.201 user 0m0.018s 00:06:52.201 sys 0m0.018s 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:52.201 02:32:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:52.201 ************************************ 00:06:52.201 END TEST accel_negative_buffers 00:06:52.201 ************************************ 00:06:52.201 Error: writing output failed: Broken pipe 00:06:52.201 02:32:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:52.201 02:32:55 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:52.201 02:32:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:52.201 02:32:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.459 ************************************ 00:06:52.459 START TEST accel_crc32c 00:06:52.459 ************************************ 00:06:52.459 02:32:55 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 
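The -c /dev/fd/62 argument suggests the JSON accel configuration assembled by build_accel_config is handed to accel_perf over an inherited file descriptor rather than a file on disk, most likely via process substitution. A hypothetical minimal version of the same pattern (the JSON body here is only a placeholder, not the config the suite really builds):

accel_json='{"subsystems": []}'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c <(echo "$accel_json") -t 1 -w crc32c -S 32 -y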
00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:52.459 02:32:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:52.459 [2024-05-15 02:32:55.550024] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:52.459 [2024-05-15 02:32:55.550085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673955 ] 00:06:52.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.459 [2024-05-15 02:32:55.655997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.459 [2024-05-15 02:32:55.704397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.717 02:32:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:53.656 02:32:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.656 00:06:53.656 real 0m1.374s 00:06:53.656 user 0m1.220s 00:06:53.656 sys 0m0.167s 00:06:53.656 02:32:56 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:53.656 02:32:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:53.656 ************************************ 00:06:53.656 END TEST accel_crc32c 00:06:53.656 ************************************ 00:06:53.656 02:32:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:53.656 02:32:56 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:53.656 02:32:56 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:53.656 02:32:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.916 ************************************ 00:06:53.916 START TEST accel_crc32c_C2 00:06:53.916 ************************************ 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.916 02:32:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:53.916 [2024-05-15 02:32:57.015453] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:53.916 [2024-05-15 02:32:57.015515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674203 ] 00:06:53.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.916 [2024-05-15 02:32:57.123489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.916 [2024-05-15 02:32:57.175489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:54.175 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.176 02:32:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.115 00:06:55.115 real 0m1.394s 00:06:55.115 user 0m1.231s 00:06:55.115 sys 0m0.178s 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:55.115 02:32:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:55.115 ************************************ 00:06:55.115 END TEST accel_crc32c_C2 00:06:55.115 ************************************ 00:06:55.375 02:32:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:55.375 02:32:58 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:55.375 02:32:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:55.375 02:32:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.375 ************************************ 00:06:55.375 START TEST accel_copy 00:06:55.375 ************************************ 00:06:55.375 02:32:58 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:55.375 02:32:58 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:55.375 02:32:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:55.375 [2024-05-15 02:32:58.501592] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:55.375 [2024-05-15 02:32:58.501660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674429 ] 00:06:55.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.375 [2024-05-15 02:32:58.608345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.375 [2024-05-15 02:32:58.660179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.634 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.635 02:32:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
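The long runs of "val=", "case "$var" in", "IFS=:" and "read -r var val" entries above are accel.sh walking the key:value lines printed by the accel_perf binary, one trace per loop step. A minimal sketch of that parser, reconstructed only from this xtrace (the exact key strings accel_perf prints are not visible in the log, so the patterns and the $accel_perf_bin name below are placeholders):

  # parse "Key: value" output from accel_perf and capture the module and opcode
  while IFS=: read -r var val; do
    val=${val# }                       # trim the space after the colon (the quoted '4096 bytes' / '1 seconds' values above suggest this)
    case "$var" in
      *Module*)   accel_module=$val ;; # captured as "software" at accel.sh@22
      *Workload*) accel_opc=$val ;;    # captured as "copy", "fill", ... at accel.sh@23
    esac
  done < <("$accel_perf_bin" -c <(build_accel_config) -t 1 -w copy -y)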
00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.017 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:57.018 02:32:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.018 00:06:57.018 real 0m1.395s 00:06:57.018 user 0m1.236s 00:06:57.018 sys 0m0.172s 00:06:57.018 02:32:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:57.018 02:32:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.018 ************************************ 00:06:57.018 END TEST accel_copy 00:06:57.018 ************************************ 00:06:57.018 02:32:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.018 02:32:59 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:57.018 02:32:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:57.018 02:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.018 ************************************ 00:06:57.018 START TEST accel_fill 00:06:57.018 ************************************ 00:06:57.018 02:32:59 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.018 02:32:59 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:57.018 02:32:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:57.018 [2024-05-15 02:32:59.982596] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:57.018 [2024-05-15 02:32:59.982655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674629 ] 00:06:57.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.018 [2024-05-15 02:33:00.091800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.018 [2024-05-15 02:33:00.140425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.018 02:33:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.402 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:58.403 02:33:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.403 00:06:58.403 real 0m1.393s 00:06:58.403 user 0m1.234s 00:06:58.403 sys 0m0.171s 00:06:58.403 02:33:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:58.403 02:33:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:58.403 ************************************ 00:06:58.403 END TEST accel_fill 00:06:58.403 ************************************ 00:06:58.403 02:33:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:58.403 02:33:01 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:58.403 02:33:01 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:58.403 02:33:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.403 ************************************ 00:06:58.403 START TEST accel_copy_crc32c 00:06:58.403 ************************************ 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
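The accel.sh@12 trace just above shows the copy_crc32c case launching the accel_perf example binary with -c /dev/fd/62 while build_accel_config runs in the same command, which is how a bash process substitution appears in xtrace. A short sketch of that invocation as it seems to be wired up (only -t, -w and -y are interpreted here, based on the '1 seconds', workload and 'Yes' values read back in the traces):

  spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as it appears in the log
  # -t 1: run for one second; -w copy_crc32c: workload under test; -y: verify the result
  # the generated JSON config is handed over on an anonymous fd, shown by bash as /dev/fd/62
  "$spdk_dir/build/examples/accel_perf" -c <(build_accel_config) -t 1 -w copy_crc32c -y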
00:06:58.403 [2024-05-15 02:33:01.464957] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:58.403 [2024-05-15 02:33:01.465016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674927 ] 00:06:58.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.403 [2024-05-15 02:33:01.573860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.403 [2024-05-15 02:33:01.622615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.403 02:33:01 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.403 02:33:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.780 00:06:59.780 real 0m1.391s 00:06:59.780 user 0m1.232s 00:06:59.780 sys 0m0.172s 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:59.780 02:33:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:59.780 ************************************ 00:06:59.780 END TEST accel_copy_crc32c 00:06:59.780 ************************************ 00:06:59.780 02:33:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.780 02:33:02 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:59.780 02:33:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:59.780 02:33:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.780 ************************************ 00:06:59.780 START TEST accel_copy_crc32c_C2 00:06:59.780 ************************************ 00:06:59.780 02:33:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.780 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.781 02:33:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:59.781 [2024-05-15 02:33:02.946566] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:59.781 [2024-05-15 02:33:02.946643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675159 ] 00:06:59.781 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.781 [2024-05-15 02:33:03.056841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.040 [2024-05-15 02:33:03.110369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- 
# accel_opc=copy_crc32c 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:33:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.420 00:07:01.420 real 0m1.403s 00:07:01.420 user 0m1.236s 00:07:01.420 sys 0m0.181s 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:01.420 02:33:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.420 
************************************ 00:07:01.420 END TEST accel_copy_crc32c_C2 00:07:01.420 ************************************ 00:07:01.420 02:33:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:01.420 02:33:04 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:01.420 02:33:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:01.420 02:33:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.420 ************************************ 00:07:01.420 START TEST accel_dualcast 00:07:01.420 ************************************ 00:07:01.420 02:33:04 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:01.420 [2024-05-15 02:33:04.438031] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
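build_accel_config is traced at accel.sh@31-@41 at the start of each test above: it resets an accel_json_cfg array, skips every optional "[[ 0 -gt 0 ]]" / "[[ -n '' ]]" branch because no extra accel modules are configured in this run, sets a comma IFS to join whatever was collected, and pipes the result through jq -r. A rough sketch of that shape; the JSON wrapper below is illustrative only, since the actual fragments never appear in this log:

  build_accel_config() {
    accel_json_cfg=()                  # accel.sh@31
    # accel.sh@32-36: optional modules would append JSON method entries here;
    # every branch is skipped in this run, so the array stays empty
    local IFS=,                        # accel.sh@40: join any entries with commas
    # accel.sh@41: emit the config and normalize it with jq
    jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
  }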
00:07:01.420 [2024-05-15 02:33:04.438093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675485 ] 00:07:01.420 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.420 [2024-05-15 02:33:04.544011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.420 [2024-05-15 02:33:04.591098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 
02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.420 02:33:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.421 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.421 02:33:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:02.799 02:33:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.799 00:07:02.799 real 0m1.371s 00:07:02.799 user 0m1.214s 00:07:02.799 sys 0m0.168s 00:07:02.799 02:33:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:02.799 02:33:05 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:02.799 ************************************ 00:07:02.799 END TEST accel_dualcast 00:07:02.799 ************************************ 00:07:02.799 02:33:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:02.799 02:33:05 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:02.799 02:33:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:02.799 02:33:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.799 ************************************ 00:07:02.799 START TEST accel_compare 00:07:02.799 ************************************ 00:07:02.799 02:33:05 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:02.799 02:33:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:02.799 [2024-05-15 02:33:05.896093] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
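The three accel/accel.sh@27 checks that close each case above (for dualcast: "[[ -n software ]]", "[[ -n dualcast ]]", "[[ software == \s\o\f\t\w\a\r\e ]]") are the parsed variables being asserted after the run. In sketch form, using the variable names assigned at accel.sh@22 and @23 in the trace:

  # accel.sh@27 (sketch): fail the test if the run did not report a module and
  # an opcode, or if something other than the software module handled the op
  [[ -n $accel_module ]]            # a module name was read back (here: software)
  [[ -n $accel_opc ]]               # an opcode was read back (here: dualcast)
  [[ $accel_module == software ]]   # this job exercises the software path only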
00:07:02.799 [2024-05-15 02:33:05.896155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675928 ] 00:07:02.799 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.799 [2024-05-15 02:33:06.003319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.799 [2024-05-15 02:33:06.051277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.058 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.058 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.058 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.058 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.058 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.059 02:33:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.998 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.999 02:33:07 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:03.999 02:33:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.999 00:07:03.999 real 0m1.371s 00:07:03.999 user 0m1.213s 00:07:03.999 sys 0m0.171s 00:07:03.999 02:33:07 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:03.999 02:33:07 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:03.999 ************************************ 00:07:03.999 END TEST accel_compare 00:07:03.999 ************************************ 00:07:03.999 02:33:07 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:03.999 02:33:07 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:03.999 02:33:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:03.999 02:33:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.258 ************************************ 00:07:04.258 START TEST accel_xor 00:07:04.258 ************************************ 00:07:04.258 02:33:07 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:04.258 02:33:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:04.258 [2024-05-15 02:33:07.357596] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
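For context: the accel_xor block that follows is accel.sh's accel_test wrapper launching the accel_perf example binary and parsing its output one "var: val" pair at a time, which is what the repeated IFS=: / read -r var val lines are. A minimal stand-alone reproduction of the same run, mirroring the short accel_perf form the wrapper itself logs at accel.sh@15 and assuming the workspace layout shown in this log (the generated accel JSON config is empty here, accel_json_cfg=() with no entries), would look roughly like:

  # hypothetical manual invocation of the example traced above; path taken from the trace
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
  # -t 1: run the workload for 1 second; -w xor: xor opcode; -y: verify the results

After the run the wrapper only asserts that the module it parsed is the software engine and that the opcode matches, then prints the real/user/sys timing summary.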
00:07:04.258 [2024-05-15 02:33:07.357657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676148 ] 00:07:04.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.258 [2024-05-15 02:33:07.465069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.258 [2024-05-15 02:33:07.516266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.517 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.518 02:33:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 
02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.452 02:33:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.452 00:07:05.452 real 0m1.393s 00:07:05.452 user 0m1.226s 00:07:05.452 sys 0m0.182s 00:07:05.452 02:33:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:05.452 02:33:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.452 ************************************ 00:07:05.452 END TEST accel_xor 00:07:05.452 ************************************ 00:07:05.712 02:33:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:05.712 02:33:08 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:07:05.712 02:33:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:05.712 02:33:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.712 ************************************ 00:07:05.712 START TEST accel_xor 00:07:05.712 ************************************ 00:07:05.712 02:33:08 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:05.712 02:33:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:05.712 [2024-05-15 02:33:08.845322] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
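The second accel_xor pass, whose run_test line appears just above, differs from the first only by the extra -x 3 argument; -x appears to set the number of xor source buffers accel_perf works on. Under the same path assumption as the earlier sketch:

  # hypothetical stand-alone equivalent of the -x 3 variant traced below
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3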
00:07:05.712 [2024-05-15 02:33:08.845409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676347 ] 00:07:05.712 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.712 [2024-05-15 02:33:08.950916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.712 [2024-05-15 02:33:09.001302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.971 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.971 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.971 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.972 02:33:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 
02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:07.350 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.351 02:33:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:07.351 02:33:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.351 00:07:07.351 real 0m1.393s 00:07:07.351 user 0m1.233s 00:07:07.351 sys 0m0.174s 00:07:07.351 02:33:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:07.351 02:33:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:07.351 ************************************ 00:07:07.351 END TEST accel_xor 00:07:07.351 ************************************ 00:07:07.351 02:33:10 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:07.351 02:33:10 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:07.351 02:33:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:07.351 02:33:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.351 ************************************ 00:07:07.351 START TEST accel_dif_verify 00:07:07.351 ************************************ 00:07:07.351 02:33:10 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:07.351 [2024-05-15 02:33:10.321328] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
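accel_dif_verify, which starts just above, follows the same wrapper pattern with -w dif_verify. The extra '4096 bytes', '512 bytes' and '8 bytes' values parsed further down appear to be the buffer, block and DIF metadata sizes accel_perf reports for this workload; that reading is an inference from the trace, not something it states. A comparable stand-alone sketch, same assumed path as before:

  # hypothetical 1-second DIF verify run on the software module
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify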
00:07:07.351 [2024-05-15 02:33:10.321389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676546 ] 00:07:07.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.351 [2024-05-15 02:33:10.429398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.351 [2024-05-15 02:33:10.479691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 
02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.351 02:33:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 
02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:08.730 02:33:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.730 00:07:08.730 real 0m1.395s 00:07:08.730 user 0m1.238s 00:07:08.730 sys 0m0.172s 00:07:08.730 02:33:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:08.730 02:33:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:08.730 ************************************ 00:07:08.730 END TEST accel_dif_verify 00:07:08.730 ************************************ 00:07:08.730 02:33:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:08.730 02:33:11 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:08.730 02:33:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:08.730 02:33:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.730 ************************************ 00:07:08.730 START TEST accel_dif_generate 00:07:08.730 ************************************ 00:07:08.730 02:33:11 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.730 
02:33:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:08.730 02:33:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:08.731 [2024-05-15 02:33:11.803942] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:08.731 [2024-05-15 02:33:11.804001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676752 ] 00:07:08.731 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.731 [2024-05-15 02:33:11.911141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.731 [2024-05-15 02:33:11.961668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.731 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@23 
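Each of these blocks starts a fresh accel_perf process, which is why the DPDK EAL parameters line carries a new --file-prefix=spdk_pid... value each time while the single-core, --huge-unlink options stay constant. Within the suite the case is driven through the run_test helper rather than by hand, as in the invocation already traced above:

  # as issued by accel.sh; run_test and accel_test are the suite's own shell helpers
  run_test accel_dif_generate accel_test -t 1 -w dif_generate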
-- # accel_opc=dif_generate 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.994 02:33:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:09.964 02:33:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.964 00:07:09.964 real 0m1.395s 00:07:09.964 user 0m1.230s 00:07:09.964 sys 0m0.178s 00:07:09.964 
02:33:13 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:09.964 02:33:13 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:09.964 ************************************ 00:07:09.964 END TEST accel_dif_generate 00:07:09.964 ************************************ 00:07:09.964 02:33:13 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:09.964 02:33:13 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:07:09.964 02:33:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:09.964 02:33:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.223 ************************************ 00:07:10.223 START TEST accel_dif_generate_copy 00:07:10.223 ************************************ 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:10.223 [2024-05-15 02:33:13.295450] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
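dif_generate_copy, starting just above, is handled identically; the only workload-specific state the wrapper keeps is the opcode string it extracts from the accel_perf output and re-checks once the process exits. Roughly, with the variables left unexpanded (the trace shows them already substituted), the closing accel.sh@27 checks amount to:

  # sketch of the three assertions visible at the end of each block
  [[ -n $accel_module ]]              # software
  [[ -n $accel_opc ]]                 # here: dif_generate_copy
  [[ $accel_module == software ]]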
00:07:10.223 [2024-05-15 02:33:13.295544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676959 ] 00:07:10.223 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.223 [2024-05-15 02:33:13.404662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.223 [2024-05-15 02:33:13.451446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.223 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.224 02:33:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.601 00:07:11.601 real 0m1.387s 00:07:11.601 user 0m1.231s 00:07:11.601 sys 0m0.169s 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:11.601 02:33:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.601 ************************************ 00:07:11.601 END TEST accel_dif_generate_copy 00:07:11.601 ************************************ 00:07:11.601 02:33:14 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:11.601 02:33:14 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.601 02:33:14 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:07:11.601 02:33:14 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:11.601 02:33:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.601 ************************************ 00:07:11.601 START TEST accel_comp 00:07:11.601 ************************************ 00:07:11.601 02:33:14 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.601 02:33:14 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:11.601 [2024-05-15 02:33:14.765832] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:11.601 [2024-05-15 02:33:14.765917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677212 ] 00:07:11.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.601 [2024-05-15 02:33:14.871147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.861 [2024-05-15 02:33:14.919291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp 
-- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case 
"$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.861 02:33:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:13.240 02:33:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.240 00:07:13.240 real 0m1.377s 00:07:13.240 user 0m1.230s 00:07:13.240 sys 0m0.162s 00:07:13.240 02:33:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.240 02:33:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 END TEST accel_comp 00:07:13.240 ************************************ 00:07:13.240 02:33:16 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.240 02:33:16 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:07:13.240 02:33:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.240 02:33:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.240 ************************************ 00:07:13.240 START TEST accel_decomp 00:07:13.240 ************************************ 00:07:13.240 02:33:16 accel.accel_decomp -- 
common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:13.240 [2024-05-15 02:33:16.231915] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:13.240 [2024-05-15 02:33:16.231976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677451 ] 00:07:13.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.240 [2024-05-15 02:33:16.338934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.240 [2024-05-15 02:33:16.389733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 
-- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.240 02:33:16 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var 
val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.241 02:33:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:14.619 02:33:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:14.620 02:33:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.620 02:33:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.620 02:33:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.620 00:07:14.620 real 0m1.393s 00:07:14.620 user 0m1.239s 00:07:14.620 sys 0m0.169s 00:07:14.620 02:33:17 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:14.620 02:33:17 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:14.620 ************************************ 00:07:14.620 END TEST accel_decomp 00:07:14.620 ************************************ 00:07:14.620 02:33:17 accel -- accel/accel.sh@118 -- # run_test 
accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.620 02:33:17 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:07:14.620 02:33:17 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:14.620 02:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.620 ************************************ 00:07:14.620 START TEST accel_decmop_full 00:07:14.620 ************************************ 00:07:14.620 02:33:17 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:14.620 02:33:17 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:14.620 [2024-05-15 02:33:17.709946] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:07:14.620 [2024-05-15 02:33:17.710013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677701 ] 00:07:14.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.620 [2024-05-15 02:33:17.816706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.620 [2024-05-15 02:33:17.866963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.879 02:33:17 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # 
read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.819 02:33:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.819 00:07:15.819 real 0m1.406s 00:07:15.819 user 0m1.247s 00:07:15.819 sys 0m0.174s 00:07:15.819 02:33:19 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:15.819 02:33:19 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:15.819 ************************************ 00:07:15.819 END TEST accel_decmop_full 00:07:15.819 ************************************ 00:07:16.078 02:33:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.078 02:33:19 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:07:16.078 02:33:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:16.078 02:33:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.078 ************************************ 00:07:16.078 START TEST accel_decomp_mcore 00:07:16.078 ************************************ 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:16.078 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:16.078 [2024-05-15 02:33:19.209408] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:16.078 [2024-05-15 02:33:19.209467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677932 ] 00:07:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.078 [2024-05-15 02:33:19.315986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.338 [2024-05-15 02:33:19.370294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.338 [2024-05-15 02:33:19.370381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.338 [2024-05-15 02:33:19.370486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.338 [2024-05-15 02:33:19.370487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.338 02:33:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.718 00:07:17.718 real 0m1.412s 00:07:17.718 user 0m4.640s 00:07:17.718 sys 0m0.179s 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.718 02:33:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:17.718 ************************************ 00:07:17.718 END TEST accel_decomp_mcore 00:07:17.718 ************************************ 00:07:17.718 02:33:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.718 02:33:20 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:07:17.718 02:33:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.718 02:33:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.718 ************************************ 00:07:17.718 START TEST accel_decomp_full_mcore 00:07:17.718 ************************************ 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.718 02:33:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:17.718 [2024-05-15 02:33:20.705095] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:17.718 [2024-05-15 02:33:20.705153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678138 ] 00:07:17.718 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.718 [2024-05-15 02:33:20.813403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.718 [2024-05-15 02:33:20.867630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.718 [2024-05-15 02:33:20.867714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.718 [2024-05-15 02:33:20.867816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.718 [2024-05-15 02:33:20.867816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.718 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.719 02:33:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.097 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.098 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.098 02:33:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.098 00:07:19.098 real 0m1.424s 00:07:19.098 user 0m4.697s 00:07:19.098 sys 0m0.179s 00:07:19.098 02:33:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:19.098 02:33:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 ************************************ 00:07:19.098 END TEST accel_decomp_full_mcore 00:07:19.098 ************************************ 00:07:19.098 02:33:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.098 02:33:22 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:07:19.098 02:33:22 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:19.098 02:33:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.098 ************************************ 00:07:19.098 START TEST accel_decomp_mthread 00:07:19.098 ************************************ 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:19.098 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
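A minimal standalone reproduction of the run traced above, assuming the same workspace layout; the flag readings in the comments are my interpretation of the trace rather than quotes from accel_perf's help output:

    # 1 second software decompress with result verification (-y) and two
    # worker threads per core (-T 2); input is the suite's pre-compressed bib file
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2

The harness additionally passes -c /dev/fd/62 to hand accel_perf the JSON accel config it builds on the fly; a standalone run can omit it, in which case the software path shown in the trace is what gets exercised.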
00:07:19.098 [2024-05-15 02:33:22.217209] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:19.098 [2024-05-15 02:33:22.217271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678342 ] 00:07:19.098 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.098 [2024-05-15 02:33:22.325068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.098 [2024-05-15 02:33:22.374668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.357 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.358 02:33:22 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.296 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.556 00:07:20.556 real 0m1.397s 00:07:20.556 user 0m1.243s 00:07:20.556 sys 0m0.168s 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:20.556 02:33:23 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:20.556 ************************************ 00:07:20.556 END TEST accel_decomp_mthread 00:07:20.556 ************************************ 00:07:20.556 02:33:23 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.557 02:33:23 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:07:20.557 02:33:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:20.557 02:33:23 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:20.557 ************************************ 00:07:20.557 START TEST accel_decomp_full_mthread 00:07:20.557 ************************************ 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:20.557 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:20.557 [2024-05-15 02:33:23.701069] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
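The only switch that changes against the previous mthread run is -o 0 in the accel_perf line traced above; judging by the '111250 bytes' value this test reports, a zero transfer size makes the run consume the whole bib payload per operation instead of the 4096-byte chunks used before. A sketch of the same invocation, paths as in the trace:

    # full-buffer multithreaded decompress (mirrors the '-o 0 -T 2' command above)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2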
00:07:20.557 [2024-05-15 02:33:23.701127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678550 ] 00:07:20.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.557 [2024-05-15 02:33:23.808239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.816 [2024-05-15 02:33:23.854860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.817 02:33:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.196 00:07:22.196 real 0m1.411s 00:07:22.196 user 0m1.267s 00:07:22.196 sys 0m0.158s 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:22.196 02:33:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:22.196 ************************************ 00:07:22.196 END TEST accel_decomp_full_mthread 00:07:22.196 
************************************ 00:07:22.196 02:33:25 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:22.196 02:33:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.196 02:33:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:22.196 02:33:25 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:22.196 02:33:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.196 02:33:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:22.196 02:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.196 02:33:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.197 02:33:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.197 02:33:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.197 02:33:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.197 02:33:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:22.197 02:33:25 accel -- accel/accel.sh@41 -- # jq -r . 00:07:22.197 ************************************ 00:07:22.197 START TEST accel_dif_functional_tests 00:07:22.197 ************************************ 00:07:22.197 02:33:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:22.197 [2024-05-15 02:33:25.218007] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:22.197 [2024-05-15 02:33:25.218066] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678751 ] 00:07:22.197 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.197 [2024-05-15 02:33:25.324997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.197 [2024-05-15 02:33:25.378396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.197 [2024-05-15 02:33:25.378483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.197 [2024-05-15 02:33:25.378489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.197 00:07:22.197 00:07:22.197 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.197 http://cunit.sourceforge.net/ 00:07:22.197 00:07:22.197 00:07:22.197 Suite: accel_dif 00:07:22.197 Test: verify: DIF generated, GUARD check ...passed 00:07:22.197 Test: verify: DIF generated, APPTAG check ...passed 00:07:22.197 Test: verify: DIF generated, REFTAG check ...passed 00:07:22.197 Test: verify: DIF not generated, GUARD check ...[2024-05-15 02:33:25.457186] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.197 [2024-05-15 02:33:25.457246] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:22.197 passed 00:07:22.197 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 02:33:25.457285] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.197 [2024-05-15 02:33:25.457314] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:22.197 passed 00:07:22.197 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 02:33:25.457343] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.197 [2024-05-15 02:33:25.457369] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:22.197 passed 00:07:22.197 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:22.197 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 02:33:25.457430] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:22.197 passed 00:07:22.197 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:22.197 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:22.197 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:22.197 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 02:33:25.457579] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:22.197 passed 00:07:22.197 Test: generate copy: DIF generated, GUARD check ...passed 00:07:22.197 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:22.197 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:22.197 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:22.197 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:22.197 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:22.197 Test: generate copy: iovecs-len validate ...[2024-05-15 02:33:25.457825] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:22.197 passed 00:07:22.197 Test: generate copy: buffer alignment validate ...passed 00:07:22.197 00:07:22.197 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.197 suites 1 1 n/a 0 0 00:07:22.197 tests 20 20 20 0 0 00:07:22.197 asserts 204 204 204 0 n/a 00:07:22.197 00:07:22.197 Elapsed time = 0.000 seconds 00:07:22.456 00:07:22.456 real 0m0.479s 00:07:22.456 user 0m0.684s 00:07:22.456 sys 0m0.208s 00:07:22.456 02:33:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:22.456 02:33:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:22.456 ************************************ 00:07:22.456 END TEST accel_dif_functional_tests 00:07:22.456 ************************************ 00:07:22.456 00:07:22.456 real 0m32.532s 00:07:22.456 user 0m34.623s 00:07:22.456 sys 0m6.039s 00:07:22.456 02:33:25 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:22.456 02:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.456 ************************************ 00:07:22.456 END TEST accel 00:07:22.456 ************************************ 00:07:22.456 02:33:25 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:22.456 02:33:25 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:22.456 02:33:25 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:22.456 02:33:25 -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 ************************************ 00:07:22.715 START TEST accel_rpc 00:07:22.715 ************************************ 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:22.715 * Looking for test storage... 
00:07:22.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:22.715 02:33:25 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.715 02:33:25 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=678909 00:07:22.715 02:33:25 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 678909 00:07:22.715 02:33:25 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 678909 ']' 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:22.715 02:33:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.715 [2024-05-15 02:33:25.949231] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:22.715 [2024-05-15 02:33:25.949315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678909 ] 00:07:22.715 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.974 [2024-05-15 02:33:26.059339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.974 [2024-05-15 02:33:26.110812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.974 02:33:26 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:22.974 02:33:26 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:22.974 02:33:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:22.974 02:33:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:22.974 02:33:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:22.974 02:33:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:22.974 02:33:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:22.974 02:33:26 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:22.974 02:33:26 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:22.974 02:33:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 ************************************ 00:07:22.974 START TEST accel_assign_opcode 00:07:22.974 ************************************ 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 [2024-05-15 02:33:26.187444] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:22.974 [2024-05-15 02:33:26.195458] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:22.974 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:23.233 software 00:07:23.233 00:07:23.233 real 0m0.237s 00:07:23.233 user 0m0.039s 00:07:23.233 sys 0m0.015s 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:23.233 02:33:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:23.233 ************************************ 00:07:23.233 END TEST accel_assign_opcode 00:07:23.233 ************************************ 00:07:23.233 02:33:26 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 678909 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 678909 ']' 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 678909 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 678909 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 678909' 00:07:23.233 killing process with pid 678909 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@966 -- # kill 678909 00:07:23.233 02:33:26 accel_rpc -- common/autotest_common.sh@971 -- # wait 678909 00:07:23.801 00:07:23.801 real 0m1.076s 00:07:23.801 user 0m0.964s 00:07:23.801 sys 0m0.537s 00:07:23.801 02:33:26 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:23.801 02:33:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.801 ************************************ 00:07:23.801 END TEST accel_rpc 00:07:23.801 ************************************ 00:07:23.801 02:33:26 -- spdk/autotest.sh@181 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.801 02:33:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:23.801 02:33:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:23.801 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:07:23.801 ************************************ 00:07:23.801 START TEST app_cmdline 00:07:23.801 ************************************ 00:07:23.801 02:33:26 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:23.801 * Looking for test storage... 00:07:23.801 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:23.801 02:33:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.801 02:33:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=679074 00:07:23.801 02:33:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 679074 00:07:23.801 02:33:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 679074 ']' 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:23.801 02:33:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.060 [2024-05-15 02:33:27.117016] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
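Because spdk_tgt is launched above with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two RPCs may succeed; the checks that follow in this log reduce to roughly the sequence below, sketched with the repo's rpc.py (the jq/sort post-processing matches the trace):

    # allowed methods succeed
    scripts/rpc.py spdk_get_version
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # any method outside the allow-list is expected to be rejected with
    # JSON-RPC error -32601 "Method not found", as seen further down
    scripts/rpc.py env_dpdk_get_mem_stats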
00:07:24.060 [2024-05-15 02:33:27.117097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679074 ] 00:07:24.060 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.060 [2024-05-15 02:33:27.223993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.060 [2024-05-15 02:33:27.271217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.319 02:33:27 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:24.319 02:33:27 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:07:24.319 02:33:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:24.578 { 00:07:24.578 "version": "SPDK v24.05-pre git sha1 4506c0c36", 00:07:24.578 "fields": { 00:07:24.578 "major": 24, 00:07:24.578 "minor": 5, 00:07:24.578 "patch": 0, 00:07:24.578 "suffix": "-pre", 00:07:24.578 "commit": "4506c0c36" 00:07:24.578 } 00:07:24.578 } 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.578 02:33:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:24.578 02:33:27 app_cmdline -- 
common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:24.578 02:33:27 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.837 request: 00:07:24.837 { 00:07:24.837 "method": "env_dpdk_get_mem_stats", 00:07:24.837 "req_id": 1 00:07:24.837 } 00:07:24.837 Got JSON-RPC error response 00:07:24.837 response: 00:07:24.837 { 00:07:24.837 "code": -32601, 00:07:24.837 "message": "Method not found" 00:07:24.837 } 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:24.837 02:33:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 679074 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 679074 ']' 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 679074 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 679074 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 679074' 00:07:24.837 killing process with pid 679074 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@966 -- # kill 679074 00:07:24.837 02:33:28 app_cmdline -- common/autotest_common.sh@971 -- # wait 679074 00:07:25.405 00:07:25.405 real 0m1.490s 00:07:25.405 user 0m1.758s 00:07:25.405 sys 0m0.555s 00:07:25.405 02:33:28 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:25.405 02:33:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.405 ************************************ 00:07:25.405 END TEST app_cmdline 00:07:25.405 ************************************ 00:07:25.405 02:33:28 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:25.405 02:33:28 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:25.405 02:33:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.405 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.405 ************************************ 00:07:25.405 START TEST version 00:07:25.405 ************************************ 00:07:25.405 02:33:28 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:25.405 * Looking for test storage... 
00:07:25.405 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:25.405 02:33:28 version -- app/version.sh@17 -- # get_header_version major 00:07:25.405 02:33:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # cut -f2 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.405 02:33:28 version -- app/version.sh@17 -- # major=24 00:07:25.405 02:33:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.405 02:33:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # cut -f2 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.405 02:33:28 version -- app/version.sh@18 -- # minor=5 00:07:25.405 02:33:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.405 02:33:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # cut -f2 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.405 02:33:28 version -- app/version.sh@19 -- # patch=0 00:07:25.405 02:33:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.405 02:33:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # cut -f2 00:07:25.405 02:33:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.405 02:33:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.405 02:33:28 version -- app/version.sh@22 -- # version=24.5 00:07:25.405 02:33:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.405 02:33:28 version -- app/version.sh@28 -- # version=24.5rc0 00:07:25.405 02:33:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:25.405 02:33:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.664 02:33:28 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:25.664 02:33:28 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:25.664 00:07:25.664 real 0m0.203s 00:07:25.664 user 0m0.107s 00:07:25.664 sys 0m0.145s 00:07:25.664 02:33:28 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:25.664 02:33:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.664 ************************************ 00:07:25.664 END TEST version 00:07:25.664 ************************************ 00:07:25.664 02:33:28 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:25.664 02:33:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:25.664 02:33:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.664 02:33:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.664 02:33:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:25.664 02:33:28 -- 
spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:25.664 02:33:28 -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:25.664 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.664 02:33:28 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:25.664 02:33:28 -- spdk/autotest.sh@279 -- # '[' rdma = rdma ']' 00:07:25.664 02:33:28 -- spdk/autotest.sh@280 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:25.664 02:33:28 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:25.664 02:33:28 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.664 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.664 ************************************ 00:07:25.664 START TEST nvmf_rdma 00:07:25.664 ************************************ 00:07:25.665 02:33:28 nvmf_rdma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:25.665 * Looking for test storage... 00:07:25.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:25.925 02:33:28 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.925 02:33:28 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.925 02:33:28 nvmf_rdma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.925 02:33:28 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:28 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:28 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:28 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:25.925 02:33:28 nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:25.925 02:33:28 nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:25.925 02:33:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 02:33:28 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:25.925 02:33:29 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:25.925 02:33:29 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:25.925 02:33:29 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.925 02:33:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 
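nvmf/common.sh, sourced just above, fixes the addressing these RDMA tests rely on: NVMF_PORT=4420, NVMF_IP_PREFIX=192.168.100 with NVMF_IP_LEAST_ADDR=8, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, and a host NQN/ID taken from nvme gen-hostnqn. Purely as an illustration of those defaults (the suite drives everything through its own wrappers), a manual nvme-cli connect built from them would look like:

    # hypothetical: assumes a target is already listening on the first
    # address the suite derives from its 192.168.100.x prefix
    nvme connect -t rdma -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID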
************************************ 00:07:25.925 START TEST nvmf_example 00:07:25.925 ************************************ 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:25.925 * Looking for test storage... 00:07:25.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:25.925 02:33:29 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:25.925 02:33:29 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.926 02:33:29 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.500 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
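The helper traced here classifies host NICs purely by PCI vendor/device ID: Intel E810 (0x8086:0x1592/0x159b), Intel X722 (0x8086:0x37d2), and the Mellanox mlx5 family (0x15b3:...), with the mlx ID list continuing just below. A minimal stand-alone check for the same IDs, assuming lspci is available (a sketch, not the autotest helper itself):

  # List NICs this job would treat as RDMA-capable, by PCI [vendor:device] ID.
  lspci -Dnn | grep -Ei '(15b3:(1013|1015|1017|1019|1021|101d|a2d6|a2dc)|8086:(1592|159b|37d2))'

On this node the scan matches the two Mellanox 0x1015 ports reported further down in the trace as 0000:18:00.0 and 0000:18:00.1.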
00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:32.501 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:32.501 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.501 02:33:35 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:32.501 Found net devices under 0000:18:00.0: mlx_0_0 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:32.501 Found net devices under 0000:18:00.1: mlx_0_1 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:32.501 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:32.502 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.502 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:07:32.502 altname enp24s0f0np0 00:07:32.502 altname ens785f0np0 00:07:32.502 inet 192.168.100.8/24 scope global mlx_0_0 00:07:32.502 valid_lft forever preferred_lft forever 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:32.502 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.502 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:07:32.502 altname enp24s0f1np1 00:07:32.502 altname ens785f1np1 00:07:32.502 inet 192.168.100.9/24 scope global mlx_0_1 00:07:32.502 valid_lft forever preferred_lft forever 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:32.502 02:33:35 
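get_ip_address, traced just above, is a one-line iproute2 pipeline; the same extraction can be run directly with the interface names from this job:

  # Print the IPv4 address (without the /24 prefix) assigned to an RDMA netdev.
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9

ip -o keeps each address record on a single line, so field 4 is always the CIDR address.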
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:32.502 192.168.100.9' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:32.502 192.168.100.9' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:32.502 
02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:32.502 192.168.100.9' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=682362 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 682362 00:07:32.502 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 682362 ']' 00:07:32.503 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.503 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:32.503 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
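nvmfexamplestart backgrounds the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF, pid 682362 in this run) and waitforlisten then blocks until the application answers on its RPC socket. A minimal stand-in for that wait, assuming the stock rpc.py client and the default /var/tmp/spdk.sock socket (the real helper is stricter: it also watches the pid and enforces a timeout):

  # Poll the target's RPC socket until it accepts requests.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done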
00:07:32.503 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:32.503 02:33:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:33.509 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:33.769 02:33:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
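Stripped of the xtrace prefixes, the example test boils down to five RPCs against the freshly started target followed by one initiator-side spdk_nvme_perf run, all with the values traced above (rpc_cmd is the suite's wrapper around scripts/rpc.py):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $spdk/scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB ramdisk bdev -> Malloc0
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
          -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

-q 64 keeps 64 I/Os in flight, -o 4096 issues 4 KiB I/Os, -w randrw -M 30 is a 30% read / 70% write random mix, and -t 10 runs for ten seconds against the RDMA listener created above.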
00:07:33.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.981 Initializing NVMe Controllers 00:07:45.981 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.981 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:45.981 Initialization complete. Launching workers. 00:07:45.981 ======================================================== 00:07:45.981 Latency(us) 00:07:45.981 Device Information : IOPS MiB/s Average min max 00:07:45.981 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20668.40 80.74 3095.97 766.82 15966.79 00:07:45.981 ======================================================== 00:07:45.981 Total : 20668.40 80.74 3095.97 766.82 15966.79 00:07:45.981 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:45.981 rmmod nvme_rdma 00:07:45.981 rmmod nvme_fabrics 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 682362 ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 682362 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 682362 ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 682362 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 682362 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 682362' 00:07:45.981 killing process with pid 682362 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # kill 682362 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@971 -- # wait 682362 00:07:45.981 [2024-05-15 02:33:48.381387] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:45.981 nvmf threads initialize successfully 00:07:45.981 bdev subsystem init successfully 00:07:45.981 created a nvmf target service 00:07:45.981 create targets's poll groups done 00:07:45.981 all subsystems of target started 
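The columns in the result table above are internally consistent with the perf parameters: 20668.40 IOPS at 4096-byte I/Os works out to the reported 80.74 MiB/s, and with -q 64 outstanding I/Os the ~3.1 ms (3095.97 us) average latency is what the queue depth implies. A quick check:

  awk 'BEGIN { printf "%.2f MiB/s  %.2f ms avg\n", 20668.40*4096/1048576, 64/20668.40*1000 }'
  # -> 80.74 MiB/s  3.10 ms avg

The min/max columns are per-I/O latencies in microseconds, so the spread in this run is roughly 0.77 ms to 16 ms.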
00:07:45.981 nvmf target is running 00:07:45.981 all subsystems of target stopped 00:07:45.981 destroy targets's poll groups done 00:07:45.981 destroyed the nvmf target service 00:07:45.981 bdev subsystem finish successfully 00:07:45.981 nvmf threads destroy successfully 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 00:07:45.981 real 0m19.572s 00:07:45.981 user 0m53.153s 00:07:45.981 sys 0m5.409s 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:45.981 02:33:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 ************************************ 00:07:45.981 END TEST nvmf_example 00:07:45.981 ************************************ 00:07:45.981 02:33:48 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:45.981 02:33:48 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:45.981 02:33:48 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:45.981 02:33:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:45.981 ************************************ 00:07:45.981 START TEST nvmf_filesystem 00:07:45.981 ************************************ 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:45.981 * Looking for test storage... 
00:07:45.981 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:45.981 02:33:48 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:45.981 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:45.982 02:33:48 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- 
common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.982 #define SPDK_CONFIG_H 00:07:45.982 #define SPDK_CONFIG_APPS 1 00:07:45.982 #define SPDK_CONFIG_ARCH native 00:07:45.982 #undef SPDK_CONFIG_ASAN 00:07:45.982 #undef SPDK_CONFIG_AVAHI 00:07:45.982 #undef SPDK_CONFIG_CET 00:07:45.982 #define SPDK_CONFIG_COVERAGE 1 00:07:45.982 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.982 #undef SPDK_CONFIG_CRYPTO 00:07:45.982 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.982 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.982 #undef SPDK_CONFIG_DAOS 00:07:45.982 #define SPDK_CONFIG_DAOS_DIR 00:07:45.982 #define SPDK_CONFIG_DEBUG 1 00:07:45.982 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.982 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:45.982 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:45.982 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:45.982 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.982 #undef SPDK_CONFIG_DPDK_UADK 00:07:45.982 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:45.982 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.982 #undef SPDK_CONFIG_FC 00:07:45.982 #define SPDK_CONFIG_FC_PATH 00:07:45.982 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.982 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.982 #undef SPDK_CONFIG_FUSE 00:07:45.982 #undef SPDK_CONFIG_FUZZER 00:07:45.982 #define SPDK_CONFIG_FUZZER_LIB 00:07:45.982 #undef SPDK_CONFIG_GOLANG 00:07:45.982 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.982 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:45.982 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.982 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:45.982 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.982 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.982 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.982 #define SPDK_CONFIG_IDXD 1 00:07:45.982 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:45.982 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.982 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.982 #define SPDK_CONFIG_ISAL 1 00:07:45.982 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.982 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.982 #define 
SPDK_CONFIG_LIBDIR 00:07:45.982 #undef SPDK_CONFIG_LTO 00:07:45.982 #define SPDK_CONFIG_MAX_LCORES 00:07:45.982 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.982 #undef SPDK_CONFIG_OCF 00:07:45.982 #define SPDK_CONFIG_OCF_PATH 00:07:45.982 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.982 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.982 #define SPDK_CONFIG_PGO_DIR 00:07:45.982 #undef SPDK_CONFIG_PGO_USE 00:07:45.982 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.982 #undef SPDK_CONFIG_RAID5F 00:07:45.982 #undef SPDK_CONFIG_RBD 00:07:45.982 #define SPDK_CONFIG_RDMA 1 00:07:45.982 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.982 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.982 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.982 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.982 #define SPDK_CONFIG_SHARED 1 00:07:45.982 #undef SPDK_CONFIG_SMA 00:07:45.982 #define SPDK_CONFIG_TESTS 1 00:07:45.982 #undef SPDK_CONFIG_TSAN 00:07:45.982 #define SPDK_CONFIG_UBLK 1 00:07:45.982 #define SPDK_CONFIG_UBSAN 1 00:07:45.982 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.982 #undef SPDK_CONFIG_URING 00:07:45.982 #define SPDK_CONFIG_URING_PATH 00:07:45.982 #undef SPDK_CONFIG_URING_ZNS 00:07:45.982 #undef SPDK_CONFIG_USDT 00:07:45.982 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.982 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.982 #undef SPDK_CONFIG_VFIO_USER 00:07:45.982 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.982 #define SPDK_CONFIG_VHOST 1 00:07:45.982 #define SPDK_CONFIG_VIRTIO 1 00:07:45.982 #undef SPDK_CONFIG_VTUNE 00:07:45.982 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.982 #define SPDK_CONFIG_WERROR 1 00:07:45.982 #define SPDK_CONFIG_WPDK_DIR 00:07:45.982 #undef SPDK_CONFIG_XNVME 00:07:45.982 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.982 02:33:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:45.983 
02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export 
SPDK_TEST_NVME 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:45.983 02:33:48 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:45.983 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 
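The long run of ": N" / "export SPDK_TEST_*" pairs traced above is autotest_common.sh giving every test flag a default and exporting it. The underlying idiom is roughly the following, using flag values visible in this trace (the full variable list lives in autotest_common.sh):

  # ':' is a no-op, so "${VAR:=default}" only assigns when the CI job has not
  # already set the flag, letting each job override any test switch.
  : "${SPDK_TEST_NVMF:=1}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
  : "${SPDK_RUN_UBSAN:=1}"
  : "${SPDK_TEST_UNITTEST:=0}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT SPDK_RUN_UBSAN SPDK_TEST_UNITTEST
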
00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.984 02:33:48 
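The ASAN_OPTIONS / UBSAN_OPTIONS / LSAN_OPTIONS exports above configure the sanitizer runtimes for every binary the test launches. A stand-alone sketch of the same setup, using the exact option strings and suppression-file path from the trace:

  # Abort on the first sanitizer error and keep core dumps usable.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # LeakSanitizer suppressions: ignore known leaks in libfuse3.
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
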
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:45.984 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 684168 ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 684168 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.6dQYjf 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" 
"$storage_fallback") 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6dQYjf/tests/target /tmp/spdk.6dQYjf 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=972910592 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4311519232 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=54858928128 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742718976 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6883790848 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867984384 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871359488 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read 
-r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339384320 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348547072 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9162752 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30870925312 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871359488 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=434176 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174265344 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174269440 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:45.985 * Looking for test storage... 
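set_test_storage, traced above and just below, walks df -T output to find the mount backing the test directory and checks that it has at least the requested 2 GiB free before falling back to /tmp. A reduced sketch of that free-space check using plain df (the real helper also evaluates the tmpfs/ramfs fallback candidates):

  requested_size=$((2048 * 1024 * 1024))   # 2 GiB, as requested in the trace
  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  # df -P prints one data line; field 4 is the available space in 1K blocks.
  avail_kb=$(df -P "$testdir" | awk 'NR==2 {print $4}')
  target_space=$((avail_kb * 1024))
  if (( target_space >= requested_size )); then
      echo "* Found test storage at $testdir"
  else
      echo "not enough space at $testdir, falling back to /tmp" >&2
  fi
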
00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=54858928128 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9098383360 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.985 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:07:45.985 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.986 02:33:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local 
-g is_hw=no 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.986 02:33:49 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- 
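The e810/x722/mlx arrays being filled above are keyed by PCI vendor:device IDs (0x8086 Intel, 0x15b3 Mellanox), and because SPDK_TEST_NVMF_NICS=mlx5 only the Mellanox list is kept. A rough equivalent of that filter using lspci instead of the script's cached sysfs scan:

  # List PCI addresses of Mellanox NICs with device ID 0x1015, the IDs reported
  # for 0000:18:00.0 / 0000:18:00.1 in this run.
  mapfile -t mlx_pci < <(lspci -Dn | awk '$3 == "15b3:1015" {print $1}')
  printf 'Found %s\n' "${mlx_pci[@]}"
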
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:52.553 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:52.553 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:52.553 Found net devices under 0000:18:00.0: mlx_0_0 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- 
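Once a PCI device passes the filter, its kernel netdev name is read from the /sys/bus/pci/devices/<addr>/net/ directory, which is what yields mlx_0_0 above (and mlx_0_1 just below). A minimal version of that lookup, mirroring the traced code:

  pci=0000:18:00.0
  # Each entry under .../net is a network interface bound to this PCI function.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the directory prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
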
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:52.553 Found net devices under 0000:18:00.1: mlx_0_1 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.553 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:52.554 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.554 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:07:52.554 altname enp24s0f0np0 00:07:52.554 altname ens785f0np0 00:07:52.554 inet 192.168.100.8/24 scope global mlx_0_0 00:07:52.554 valid_lft forever preferred_lft forever 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:52.554 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.554 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:07:52.554 altname enp24s0f1np1 00:07:52.554 altname ens785f1np1 00:07:52.554 inet 192.168.100.9/24 scope global mlx_0_1 00:07:52.554 valid_lft forever preferred_lft forever 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:52.554 192.168.100.9' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:52.554 192.168.100.9' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:52.554 192.168.100.9' 00:07:52.554 02:33:55 
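allocate_nic_ips and get_available_rdma_ips resolve each RDMA interface to its IPv4 address with the ip/awk/cut pipeline traced above, and the resulting list is then split into the first and second target IPs. A condensed sketch:

  get_ip_address() {
      local interface=$1
      # -o prints one line per address; field 4 is "ADDR/PREFIX", cut drops the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
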
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.554 ************************************ 00:07:52.554 START TEST nvmf_filesystem_no_in_capsule 00:07:52.554 ************************************ 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=687054 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 687054 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 687054 ']' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
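Condensed, the address discovery traced above amounts to the shell sequence below. This is a sketch, not the harness code itself: the real get_rdma_if_list walks every RDMA-capable device, while here the two Mellanox ports seen in this run (mlx_0_0, mlx_0_1) are named directly.

  # Pull the IPv4 address assigned to each RDMA-capable interface.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # drop the /24 prefix
  }

  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 here
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma                                                     # host-side NVMe/RDMA initiator module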
00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:52.554 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.554 [2024-05-15 02:33:55.697367] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:52.554 [2024-05-15 02:33:55.697432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.554 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.554 [2024-05-15 02:33:55.805372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.813 [2024-05-15 02:33:55.860474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.813 [2024-05-15 02:33:55.860522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.814 [2024-05-15 02:33:55.860536] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.814 [2024-05-15 02:33:55.860550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.814 [2024-05-15 02:33:55.860561] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.814 [2024-05-15 02:33:55.860619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.814 [2024-05-15 02:33:55.860708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.814 [2024-05-15 02:33:55.860810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.814 [2024-05-15 02:33:55.860810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.814 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:52.814 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:52.814 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.814 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:52.814 02:33:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.814 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:52.814 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:52.814 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.814 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.814 [2024-05-15 02:33:56.030578] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:52.814 [2024-05-15 02:33:56.059759] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x2142d70/0x2147260) succeed. 00:07:52.814 [2024-05-15 02:33:56.074869] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21443b0/0x21888f0) succeed. 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 Malloc1 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.073 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.332 [2024-05-15 02:33:56.375113] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:53.332 [2024-05-15 02:33:56.375509] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:53.332 02:33:56 
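Stripped of the xtrace prefixes, the target-side bring-up for this first (in_capsule=0) pass is the sequence below. rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; treating it as a plain command here is an assumption, but the subcommands and arguments are exactly the ones traced.

  # Start the target and configure it over the RPC socket.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # (the harness then waits for /var/tmp/spdk.sock via waitforlisten)

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1        # 512 MiB RAM-backed bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420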
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:53.332 { 00:07:53.332 "name": "Malloc1", 00:07:53.332 "aliases": [ 00:07:53.332 "5ca31607-5f85-424c-9fbf-8617b1e887a5" 00:07:53.332 ], 00:07:53.332 "product_name": "Malloc disk", 00:07:53.332 "block_size": 512, 00:07:53.332 "num_blocks": 1048576, 00:07:53.332 "uuid": "5ca31607-5f85-424c-9fbf-8617b1e887a5", 00:07:53.332 "assigned_rate_limits": { 00:07:53.332 "rw_ios_per_sec": 0, 00:07:53.332 "rw_mbytes_per_sec": 0, 00:07:53.332 "r_mbytes_per_sec": 0, 00:07:53.332 "w_mbytes_per_sec": 0 00:07:53.332 }, 00:07:53.332 "claimed": true, 00:07:53.332 "claim_type": "exclusive_write", 00:07:53.332 "zoned": false, 00:07:53.332 "supported_io_types": { 00:07:53.332 "read": true, 00:07:53.332 "write": true, 00:07:53.332 "unmap": true, 00:07:53.332 "write_zeroes": true, 00:07:53.332 "flush": true, 00:07:53.332 "reset": true, 00:07:53.332 "compare": false, 00:07:53.332 "compare_and_write": false, 00:07:53.332 "abort": true, 00:07:53.332 "nvme_admin": false, 00:07:53.332 "nvme_io": false 00:07:53.332 }, 00:07:53.332 "memory_domains": [ 00:07:53.332 { 00:07:53.332 "dma_device_id": "system", 00:07:53.332 "dma_device_type": 1 00:07:53.332 }, 00:07:53.332 { 00:07:53.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.332 "dma_device_type": 2 00:07:53.332 } 00:07:53.332 ], 00:07:53.332 "driver_specific": {} 00:07:53.332 } 00:07:53.332 ]' 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:53.332 02:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:54.268 02:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.268 02:33:57 
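get_bdev_size then boils the bdev_get_bdevs JSON above down to a size in MiB, and filesystem.sh scales that back to bytes so it can later be compared with what the host sees. The jq filters are copied from the trace; the arithmetic is inferred from the 512 and 536870912 values it prints.

  bdev_info=$(rpc_cmd bdev_get_bdevs -b Malloc1)     # rpc_cmd: harness RPC wrapper, as above
  bs=$(jq '.[] .block_size'  <<< "$bdev_info")       # 512
  nb=$(jq '.[] .num_blocks'  <<< "$bdev_info")       # 1048576
  bdev_size=$(( bs * nb / 1024 / 1024 ))             # 512 MiB
  malloc_size=$(( bdev_size * 1024 * 1024 ))         # 536870912 bytes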
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:54.268 02:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.268 02:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:54.268 02:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:56.803 02:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:57.740 02:34:00 
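On the host side, the steps traced around here are: connect to the subsystem over RDMA, poll lsblk until a device reporting the subsystem serial appears, resolve its name, then lay down a single GPT partition for the filesystem tests. In outline (the 15-retry cap on the poll is omitted; host NQN/ID are the values from this run):

  nvme connect -i 15 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e \
      --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1

  mkdir -p /mnt/device
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1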
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.740 ************************************ 00:07:57.740 START TEST filesystem_ext4 00:07:57.740 ************************************ 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:57.740 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.741 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.741 Discarding device blocks: 0/522240 done 00:07:57.741 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:57.741 Filesystem UUID: 6d5e0cd1-b8dc-464d-bbb3-ca95838514d5 00:07:57.741 Superblock backups stored on blocks: 00:07:57.741 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:57.741 00:07:57.741 Allocating group tables: 0/64 done 00:07:57.741 Writing inode tables: 0/64 done 00:07:57.741 Creating journal (8192 blocks): done 00:07:57.741 Writing superblocks and filesystem accounting information: 0/64 done 00:07:57.741 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:57.741 02:34:00 
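Each filesystem_* subtest then runs the same smoke test on the freshly made filesystem: mount it, create and remove a file with syncs in between, unmount, and check that both the target process and the partition survived. Roughly (687054 is this run's nvmf_tgt pid; the harness's retry and unwind logic is not shown):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  kill -0 687054                              # nvmf_tgt still alive
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible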
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 687054 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.741 00:07:57.741 real 0m0.204s 00:07:57.741 user 0m0.027s 00:07:57.741 sys 0m0.074s 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:57.741 02:34:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 END TEST filesystem_ext4 00:07:57.741 ************************************ 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.001 ************************************ 00:07:58.001 START TEST filesystem_btrfs 00:07:58.001 ************************************ 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' 
btrfs = ext4 ']' 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.001 btrfs-progs v6.6.2 00:07:58.001 See https://btrfs.readthedocs.io for more information. 00:07:58.001 00:07:58.001 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:58.001 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.001 this does not affect your deployments: 00:07:58.001 - DUP for metadata (-m dup) 00:07:58.001 - enabled no-holes (-O no-holes) 00:07:58.001 - enabled free-space-tree (-R free-space-tree) 00:07:58.001 00:07:58.001 Label: (null) 00:07:58.001 UUID: 5535fefd-822f-4c79-bbec-e3a42d00df54 00:07:58.001 Node size: 16384 00:07:58.001 Sector size: 4096 00:07:58.001 Filesystem size: 510.00MiB 00:07:58.001 Block group profiles: 00:07:58.001 Data: single 8.00MiB 00:07:58.001 Metadata: DUP 32.00MiB 00:07:58.001 System: DUP 8.00MiB 00:07:58.001 SSD detected: yes 00:07:58.001 Zoned device: no 00:07:58.001 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.001 Runtime features: free-space-tree 00:07:58.001 Checksum: crc32c 00:07:58.001 Number of devices: 1 00:07:58.001 Devices: 00:07:58.001 ID SIZE PATH 00:07:58.001 1 510.00MiB /dev/nvme0n1p1 00:07:58.001 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:58.001 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 687054 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.260 00:07:58.260 real 0m0.319s 00:07:58.260 user 0m0.021s 00:07:58.260 sys 0m0.193s 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.260 ************************************ 00:07:58.260 END TEST filesystem_btrfs 00:07:58.260 ************************************ 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.260 ************************************ 00:07:58.260 START TEST filesystem_xfs 00:07:58.260 ************************************ 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:58.260 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:58.519 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:58.519 = sectsz=512 attr=2, projid32bit=1 00:07:58.519 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:58.519 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:58.519 data = bsize=4096 blocks=130560, imaxpct=25 00:07:58.519 = sunit=0 swidth=0 blks 00:07:58.519 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:58.519 log =internal log bsize=4096 blocks=16384, version=2 00:07:58.519 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:58.519 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:58.519 Discarding blocks...Done. 
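make_filesystem picks the force flag per filesystem type before calling mkfs, which is why the ext4 run above used -F while btrfs and xfs use -f. Reduced to its dispatch (the real helper also retries on failure, not shown here):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" "$force" "$dev_name"     # e.g. mkfs.xfs -f /dev/nvme0n1p1
  }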
00:07:58.519 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:58.519 02:34:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 687054 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.086 00:07:59.086 real 0m0.869s 00:07:59.086 user 0m0.027s 00:07:59.086 sys 0m0.111s 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:59.086 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.086 ************************************ 00:07:59.086 END TEST filesystem_xfs 00:07:59.086 ************************************ 00:07:59.345 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.345 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:59.345 02:34:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o 
NAME,SERIAL 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 687054 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 687054 ']' 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 687054 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 687054 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 687054' 00:08:00.282 killing process with pid 687054 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 687054 00:08:00.282 [2024-05-15 02:34:03.520846] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:00.282 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 687054 00:08:00.541 [2024-05-15 02:34:03.597572] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:00.799 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.799 00:08:00.799 real 0m8.354s 00:08:00.799 user 0m32.313s 00:08:00.799 sys 0m1.406s 00:08:00.799 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:00.799 02:34:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.799 ************************************ 00:08:00.799 END TEST nvmf_filesystem_no_in_capsule 00:08:00.799 ************************************ 00:08:00.799 02:34:04 nvmf_rdma.nvmf_filesystem -- 
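Teardown mirrors the setup: disconnect the host, wait for the serial to disappear from lsblk, delete the subsystem over RPC, and stop the target. In outline (pid and NQN from this run; waitforserial_disconnect is reduced to a simple poll):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 687054 && wait 687054                  # killprocess in the harness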
target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:00.799 02:34:04 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:00.799 02:34:04 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:00.799 02:34:04 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.058 ************************************ 00:08:01.058 START TEST nvmf_filesystem_in_capsule 00:08:01.058 ************************************ 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=688358 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 688358 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 688358 ']' 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:01.058 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.058 [2024-05-15 02:34:04.156044] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:08:01.058 [2024-05-15 02:34:04.156117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.058 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.058 [2024-05-15 02:34:04.268512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.058 [2024-05-15 02:34:04.321757] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.058 [2024-05-15 02:34:04.321804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:01.058 [2024-05-15 02:34:04.321819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.058 [2024-05-15 02:34:04.321832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.058 [2024-05-15 02:34:04.321843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.058 [2024-05-15 02:34:04.321925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.058 [2024-05-15 02:34:04.321975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.058 [2024-05-15 02:34:04.322097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.058 [2024-05-15 02:34:04.322097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.317 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.317 [2024-05-15 02:34:04.519477] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8ead70/0x8ef260) succeed. 00:08:01.317 [2024-05-15 02:34:04.534566] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8ec3b0/0x9308f0) succeed. 
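The second suite repeats the whole flow; the functional difference is only the in-capsule data size handed to the transport, which is the parameter the two run_test invocations carry (0 vs 4096):

  # nvmf_filesystem_part <in_capsule>
  #   nvmf_filesystem_no_in_capsule -> in_capsule=0    -> nvmf_create_transport ... -c 0
  #   nvmf_filesystem_in_capsule    -> in_capsule=4096 -> nvmf_create_transport ... -c 4096
  in_capsule=4096
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c "$in_capsule"
  # With -c 0 the target warned that the in-capsule size is raised to the 256-byte minimum
  # needed for msdbd=16; with -c 4096, writes up to 4 KiB can travel inside the command
  # capsule instead of being fetched by the target via RDMA reads.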
00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.576 Malloc1 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.576 [2024-05-15 02:34:04.858530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:01.576 [2024-05-15 02:34:04.858924] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:01.835 02:34:04 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.835 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:08:01.835 { 00:08:01.835 "name": "Malloc1", 00:08:01.835 "aliases": [ 00:08:01.835 "ae85b84a-6db7-4d06-a1c5-86c6a56fc897" 00:08:01.835 ], 00:08:01.835 "product_name": "Malloc disk", 00:08:01.835 "block_size": 512, 00:08:01.835 "num_blocks": 1048576, 00:08:01.835 "uuid": "ae85b84a-6db7-4d06-a1c5-86c6a56fc897", 00:08:01.835 "assigned_rate_limits": { 00:08:01.835 "rw_ios_per_sec": 0, 00:08:01.835 "rw_mbytes_per_sec": 0, 00:08:01.835 "r_mbytes_per_sec": 0, 00:08:01.835 "w_mbytes_per_sec": 0 00:08:01.835 }, 00:08:01.835 "claimed": true, 00:08:01.835 "claim_type": "exclusive_write", 00:08:01.835 "zoned": false, 00:08:01.835 "supported_io_types": { 00:08:01.835 "read": true, 00:08:01.835 "write": true, 00:08:01.835 "unmap": true, 00:08:01.835 "write_zeroes": true, 00:08:01.835 "flush": true, 00:08:01.835 "reset": true, 00:08:01.835 "compare": false, 00:08:01.835 "compare_and_write": false, 00:08:01.835 "abort": true, 00:08:01.835 "nvme_admin": false, 00:08:01.835 "nvme_io": false 00:08:01.836 }, 00:08:01.836 "memory_domains": [ 00:08:01.836 { 00:08:01.836 "dma_device_id": "system", 00:08:01.836 "dma_device_type": 1 00:08:01.836 }, 00:08:01.836 { 00:08:01.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.836 "dma_device_type": 2 00:08:01.836 } 00:08:01.836 ], 00:08:01.836 "driver_specific": {} 00:08:01.836 } 00:08:01.836 ]' 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:01.836 02:34:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:02.773 02:34:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.773 02:34:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:08:02.773 02:34:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.773 02:34:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:08:02.773 02:34:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:05.308 02:34:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:05.308 02:34:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.941 ************************************ 00:08:05.941 START TEST filesystem_in_capsule_ext4 00:08:05.941 ************************************ 00:08:05.941 02:34:09 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.941 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:08:06.199 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:06.199 mke2fs 1.46.5 (30-Dec-2021) 00:08:06.200 Discarding device blocks: 0/522240 done 00:08:06.200 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:06.200 Filesystem UUID: 5b502353-b3c3-480a-8b0b-77029ca31fc2 00:08:06.200 Superblock backups stored on blocks: 00:08:06.200 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:06.200 00:08:06.200 Allocating group tables: 0/64 done 00:08:06.200 Writing inode tables: 0/64 done 00:08:06.200 Creating journal (8192 blocks): done 00:08:06.200 Writing superblocks and filesystem accounting information: 0/64 done 00:08:06.200 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.200 02:34:09 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 688358 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.200 00:08:06.200 real 0m0.213s 00:08:06.200 user 0m0.034s 00:08:06.200 sys 0m0.068s 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:06.200 ************************************ 00:08:06.200 END TEST filesystem_in_capsule_ext4 00:08:06.200 ************************************ 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:06.200 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.458 ************************************ 00:08:06.458 START TEST filesystem_in_capsule_btrfs 00:08:06.458 ************************************ 00:08:06.458 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:06.458 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:06.458 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.458 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@931 -- # force=-f 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.459 btrfs-progs v6.6.2 00:08:06.459 See https://btrfs.readthedocs.io for more information. 00:08:06.459 00:08:06.459 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:06.459 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.459 this does not affect your deployments: 00:08:06.459 - DUP for metadata (-m dup) 00:08:06.459 - enabled no-holes (-O no-holes) 00:08:06.459 - enabled free-space-tree (-R free-space-tree) 00:08:06.459 00:08:06.459 Label: (null) 00:08:06.459 UUID: 8a56b079-096a-4a49-9a6f-f56f01e09c9e 00:08:06.459 Node size: 16384 00:08:06.459 Sector size: 4096 00:08:06.459 Filesystem size: 510.00MiB 00:08:06.459 Block group profiles: 00:08:06.459 Data: single 8.00MiB 00:08:06.459 Metadata: DUP 32.00MiB 00:08:06.459 System: DUP 8.00MiB 00:08:06.459 SSD detected: yes 00:08:06.459 Zoned device: no 00:08:06.459 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.459 Runtime features: free-space-tree 00:08:06.459 Checksum: crc32c 00:08:06.459 Number of devices: 1 00:08:06.459 Devices: 00:08:06.459 ID SIZE PATH 00:08:06.459 1 510.00MiB /dev/nvme0n1p1 00:08:06.459 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:08:06.459 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 688358 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.717 00:08:06.717 real 0m0.286s 00:08:06.717 user 0m0.024s 00:08:06.717 sys 0m0.146s 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.717 ************************************ 00:08:06.717 END TEST filesystem_in_capsule_btrfs 00:08:06.717 ************************************ 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.717 ************************************ 00:08:06.717 START TEST filesystem_in_capsule_xfs 00:08:06.717 ************************************ 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:08:06.717 02:34:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.976 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.977 = sectsz=512 attr=2, projid32bit=1 00:08:06.977 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.977 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.977 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.977 = sunit=0 swidth=0 blks 00:08:06.977 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.977 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.977 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.977 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.977 Discarding blocks...Done. 
00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 688358 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.977 00:08:06.977 real 0m0.223s 00:08:06.977 user 0m0.025s 00:08:06.977 sys 0m0.080s 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.977 ************************************ 00:08:06.977 END TEST filesystem_in_capsule_xfs 00:08:06.977 ************************************ 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:06.977 02:34:10 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.913 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.913 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:08:07.913 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:08:07.913 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.913 02:34:11 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:08:07.913 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 688358 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 688358 ']' 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 688358 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 688358 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 688358' 00:08:08.172 killing process with pid 688358 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 688358 00:08:08.172 [2024-05-15 02:34:11.276747] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:08.172 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 688358 00:08:08.172 [2024-05-15 02:34:11.385563] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.738 00:08:08.738 real 0m7.693s 00:08:08.738 user 0m29.620s 00:08:08.738 sys 0m1.348s 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.738 ************************************ 00:08:08.738 END TEST nvmf_filesystem_in_capsule 00:08:08.738 ************************************ 00:08:08.738 
02:34:11 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:08.738 rmmod nvme_rdma 00:08:08.738 rmmod nvme_fabrics 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:08.738 00:08:08.738 real 0m23.188s 00:08:08.738 user 1m3.993s 00:08:08.738 sys 0m8.059s 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:08.738 02:34:11 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.738 ************************************ 00:08:08.738 END TEST nvmf_filesystem 00:08:08.738 ************************************ 00:08:08.738 02:34:11 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:08.738 02:34:11 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:08.738 02:34:11 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:08.738 02:34:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:08.738 ************************************ 00:08:08.738 START TEST nvmf_target_discovery 00:08:08.738 ************************************ 00:08:08.738 02:34:11 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:08.997 * Looking for test storage... 
00:08:08.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.997 02:34:12 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:15.566 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.566 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:15.567 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.567 02:34:17 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:15.567 Found net devices under 0000:18:00.0: mlx_0_0 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:15.567 Found net devices under 0000:18:00.1: mlx_0_1 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:15.567 02:34:17 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:15.567 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.567 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:08:15.567 altname enp24s0f0np0 00:08:15.567 altname ens785f0np0 00:08:15.567 inet 192.168.100.8/24 scope global mlx_0_0 00:08:15.567 valid_lft forever preferred_lft forever 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:15.567 02:34:18 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:15.567 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.567 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:08:15.567 altname enp24s0f1np1 00:08:15.567 altname ens785f1np1 00:08:15.567 inet 192.168.100.9/24 scope global mlx_0_1 00:08:15.567 valid_lft forever preferred_lft forever 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.567 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:15.568 192.168.100.9' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:15.568 192.168.100.9' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:15.568 192.168.100.9' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=692373 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 692373 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 692373 ']' 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.568 [2024-05-15 02:34:18.183242] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:08:15.568 [2024-05-15 02:34:18.183311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.568 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.568 [2024-05-15 02:34:18.292994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.568 [2024-05-15 02:34:18.345203] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.568 [2024-05-15 02:34:18.345251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.568 [2024-05-15 02:34:18.345268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.568 [2024-05-15 02:34:18.345281] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.568 [2024-05-15 02:34:18.345291] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.568 [2024-05-15 02:34:18.345350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.568 [2024-05-15 02:34:18.345435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.568 [2024-05-15 02:34:18.345538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.568 [2024-05-15 02:34:18.345538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 [2024-05-15 02:34:18.547448] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1309d70/0x130e260) succeed. 00:08:15.568 [2024-05-15 02:34:18.562307] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x130b3b0/0x134f8f0) succeed. 
00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 Null1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 [2024-05-15 02:34:18.772210] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:15.568 [2024-05-15 02:34:18.772557] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 Null2 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 Null3 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.568 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.569 Null4 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.569 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.829 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 4420 00:08:15.829 00:08:15.829 Discovery Log Number of Records 6, Generation counter 6 00:08:15.829 =====Discovery Log Entry 0====== 00:08:15.829 trtype: rdma 00:08:15.829 adrfam: ipv4 00:08:15.829 subtype: current discovery subsystem 00:08:15.829 treq: not required 00:08:15.829 portid: 0 00:08:15.829 trsvcid: 4420 00:08:15.829 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.829 traddr: 192.168.100.8 00:08:15.829 eflags: explicit discovery connections, duplicate discovery information 00:08:15.829 rdma_prtype: not specified 00:08:15.829 rdma_qptype: connected 00:08:15.829 rdma_cms: rdma-cm 00:08:15.829 rdma_pkey: 0x0000 00:08:15.829 =====Discovery Log Entry 1====== 00:08:15.829 trtype: rdma 00:08:15.829 adrfam: ipv4 00:08:15.829 subtype: nvme subsystem 00:08:15.829 treq: not required 00:08:15.829 portid: 0 00:08:15.829 trsvcid: 4420 00:08:15.829 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:15.829 traddr: 192.168.100.8 
00:08:15.829 eflags: none 00:08:15.829 rdma_prtype: not specified 00:08:15.829 rdma_qptype: connected 00:08:15.829 rdma_cms: rdma-cm 00:08:15.829 rdma_pkey: 0x0000 00:08:15.829 =====Discovery Log Entry 2====== 00:08:15.829 trtype: rdma 00:08:15.829 adrfam: ipv4 00:08:15.829 subtype: nvme subsystem 00:08:15.829 treq: not required 00:08:15.829 portid: 0 00:08:15.829 trsvcid: 4420 00:08:15.829 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:15.829 traddr: 192.168.100.8 00:08:15.829 eflags: none 00:08:15.829 rdma_prtype: not specified 00:08:15.829 rdma_qptype: connected 00:08:15.829 rdma_cms: rdma-cm 00:08:15.829 rdma_pkey: 0x0000 00:08:15.829 =====Discovery Log Entry 3====== 00:08:15.829 trtype: rdma 00:08:15.829 adrfam: ipv4 00:08:15.829 subtype: nvme subsystem 00:08:15.829 treq: not required 00:08:15.829 portid: 0 00:08:15.829 trsvcid: 4420 00:08:15.829 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:15.829 traddr: 192.168.100.8 00:08:15.829 eflags: none 00:08:15.829 rdma_prtype: not specified 00:08:15.829 rdma_qptype: connected 00:08:15.830 rdma_cms: rdma-cm 00:08:15.830 rdma_pkey: 0x0000 00:08:15.830 =====Discovery Log Entry 4====== 00:08:15.830 trtype: rdma 00:08:15.830 adrfam: ipv4 00:08:15.830 subtype: nvme subsystem 00:08:15.830 treq: not required 00:08:15.830 portid: 0 00:08:15.830 trsvcid: 4420 00:08:15.830 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:15.830 traddr: 192.168.100.8 00:08:15.830 eflags: none 00:08:15.830 rdma_prtype: not specified 00:08:15.830 rdma_qptype: connected 00:08:15.830 rdma_cms: rdma-cm 00:08:15.830 rdma_pkey: 0x0000 00:08:15.830 =====Discovery Log Entry 5====== 00:08:15.830 trtype: rdma 00:08:15.830 adrfam: ipv4 00:08:15.830 subtype: discovery subsystem referral 00:08:15.830 treq: not required 00:08:15.830 portid: 0 00:08:15.830 trsvcid: 4430 00:08:15.830 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.830 traddr: 192.168.100.8 00:08:15.830 eflags: none 00:08:15.830 rdma_prtype: unrecognized 00:08:15.830 rdma_qptype: unrecognized 00:08:15.830 rdma_cms: unrecognized 00:08:15.830 rdma_pkey: 0x0000 00:08:15.830 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:15.830 Perform nvmf subsystem discovery via RPC 00:08:15.830 02:34:18 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:15.830 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:18 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 [ 00:08:15.830 { 00:08:15.830 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:15.830 "subtype": "Discovery", 00:08:15.830 "listen_addresses": [ 00:08:15.830 { 00:08:15.830 "trtype": "RDMA", 00:08:15.830 "adrfam": "IPv4", 00:08:15.830 "traddr": "192.168.100.8", 00:08:15.830 "trsvcid": "4420" 00:08:15.830 } 00:08:15.830 ], 00:08:15.830 "allow_any_host": true, 00:08:15.830 "hosts": [] 00:08:15.830 }, 00:08:15.830 { 00:08:15.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.830 "subtype": "NVMe", 00:08:15.830 "listen_addresses": [ 00:08:15.830 { 00:08:15.830 "trtype": "RDMA", 00:08:15.830 "adrfam": "IPv4", 00:08:15.830 "traddr": "192.168.100.8", 00:08:15.830 "trsvcid": "4420" 00:08:15.830 } 00:08:15.830 ], 00:08:15.830 "allow_any_host": true, 00:08:15.830 "hosts": [], 00:08:15.830 "serial_number": "SPDK00000000000001", 00:08:15.830 "model_number": "SPDK bdev Controller", 00:08:15.830 "max_namespaces": 32, 00:08:15.830 "min_cntlid": 1, 00:08:15.830 "max_cntlid": 65519, 
00:08:15.830 "namespaces": [ 00:08:15.830 { 00:08:15.830 "nsid": 1, 00:08:15.830 "bdev_name": "Null1", 00:08:15.830 "name": "Null1", 00:08:15.830 "nguid": "A43DD8C9ECAA43DFB1C0E1F0F02CE50E", 00:08:15.830 "uuid": "a43dd8c9-ecaa-43df-b1c0-e1f0f02ce50e" 00:08:15.830 } 00:08:15.830 ] 00:08:15.830 }, 00:08:15.830 { 00:08:15.830 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:15.830 "subtype": "NVMe", 00:08:15.830 "listen_addresses": [ 00:08:15.830 { 00:08:15.830 "trtype": "RDMA", 00:08:15.830 "adrfam": "IPv4", 00:08:15.830 "traddr": "192.168.100.8", 00:08:15.830 "trsvcid": "4420" 00:08:15.830 } 00:08:15.830 ], 00:08:15.830 "allow_any_host": true, 00:08:15.830 "hosts": [], 00:08:15.830 "serial_number": "SPDK00000000000002", 00:08:15.830 "model_number": "SPDK bdev Controller", 00:08:15.830 "max_namespaces": 32, 00:08:15.830 "min_cntlid": 1, 00:08:15.830 "max_cntlid": 65519, 00:08:15.830 "namespaces": [ 00:08:15.830 { 00:08:15.830 "nsid": 1, 00:08:15.830 "bdev_name": "Null2", 00:08:15.830 "name": "Null2", 00:08:15.830 "nguid": "77AAF66026D04C639EB5A8AB59E0F1A2", 00:08:15.830 "uuid": "77aaf660-26d0-4c63-9eb5-a8ab59e0f1a2" 00:08:15.830 } 00:08:15.830 ] 00:08:15.830 }, 00:08:15.830 { 00:08:15.830 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:15.830 "subtype": "NVMe", 00:08:15.830 "listen_addresses": [ 00:08:15.830 { 00:08:15.830 "trtype": "RDMA", 00:08:15.830 "adrfam": "IPv4", 00:08:15.830 "traddr": "192.168.100.8", 00:08:15.830 "trsvcid": "4420" 00:08:15.830 } 00:08:15.830 ], 00:08:15.830 "allow_any_host": true, 00:08:15.830 "hosts": [], 00:08:15.830 "serial_number": "SPDK00000000000003", 00:08:15.830 "model_number": "SPDK bdev Controller", 00:08:15.830 "max_namespaces": 32, 00:08:15.830 "min_cntlid": 1, 00:08:15.830 "max_cntlid": 65519, 00:08:15.830 "namespaces": [ 00:08:15.830 { 00:08:15.830 "nsid": 1, 00:08:15.830 "bdev_name": "Null3", 00:08:15.830 "name": "Null3", 00:08:15.830 "nguid": "768477CC18114FF890D18CD09DF79E3E", 00:08:15.830 "uuid": "768477cc-1811-4ff8-90d1-8cd09df79e3e" 00:08:15.830 } 00:08:15.830 ] 00:08:15.830 }, 00:08:15.830 { 00:08:15.830 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:15.830 "subtype": "NVMe", 00:08:15.830 "listen_addresses": [ 00:08:15.830 { 00:08:15.830 "trtype": "RDMA", 00:08:15.830 "adrfam": "IPv4", 00:08:15.830 "traddr": "192.168.100.8", 00:08:15.830 "trsvcid": "4420" 00:08:15.830 } 00:08:15.830 ], 00:08:15.830 "allow_any_host": true, 00:08:15.830 "hosts": [], 00:08:15.830 "serial_number": "SPDK00000000000004", 00:08:15.830 "model_number": "SPDK bdev Controller", 00:08:15.830 "max_namespaces": 32, 00:08:15.830 "min_cntlid": 1, 00:08:15.830 "max_cntlid": 65519, 00:08:15.830 "namespaces": [ 00:08:15.830 { 00:08:15.830 "nsid": 1, 00:08:15.830 "bdev_name": "Null4", 00:08:15.830 "name": "Null4", 00:08:15.830 "nguid": "F0919DB40815470A935CBBB8E944CFBE", 00:08:15.830 "uuid": "f0919db4-0815-470a-935c-bbb8e944cfbe" 00:08:15.830 } 00:08:15.830 ] 00:08:15.830 } 00:08:15.830 ] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 
nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:15.830 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:16.090 rmmod nvme_rdma 00:08:16.090 rmmod nvme_fabrics 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 692373 ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 692373 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 692373 ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 692373 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 692373 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 692373' 00:08:16.090 killing process with pid 692373 
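At this point the discovery log above has already shown the expected 6 records (the current discovery subsystem, cnode1 through cnode4, and the 4430 referral), and nvmf_get_subsystems returned the matching JSON, so the test tears everything down. Condensed, the cleanup the trace just executed (discovery.sh@42-57 plus nvmftestfini) is roughly the following; the rpc.py invocation and module names are taken from the trace, while the kill/wait handling inside nvmftestfini is simplified.

  # sketch of the teardown, not the literal harness code
  for i in $(seq 1 4); do
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      scripts/rpc.py bdev_null_delete "Null$i"
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
  [[ -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ]]    # nothing should be left behind
  modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics            # nvmfcleanup
  kill "$nvmfpid" && wait "$nvmfpid"                                 # killprocess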
00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 692373 00:08:16.090 [2024-05-15 02:34:19.242339] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:16.090 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 692373 00:08:16.090 [2024-05-15 02:34:19.349188] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:16.349 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.349 02:34:19 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:16.349 00:08:16.349 real 0m7.563s 00:08:16.349 user 0m6.306s 00:08:16.349 sys 0m5.123s 00:08:16.349 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:16.349 02:34:19 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.349 ************************************ 00:08:16.349 END TEST nvmf_target_discovery 00:08:16.349 ************************************ 00:08:16.349 02:34:19 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:16.349 02:34:19 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:16.349 02:34:19 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:16.349 02:34:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:16.609 ************************************ 00:08:16.609 START TEST nvmf_referrals 00:08:16.609 ************************************ 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:16.609 * Looking for test storage... 
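The harness now moves on to the referrals test via run_test. Outside the CI pipeline the same script can in principle be invoked directly (hypothetical standalone run, assuming an SPDK checkout with the RDMA NICs already configured as above):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  test/nvmf/target/referrals.sh --transport=rdma

The referral-specific defaults it sources are echoed in the trace that follows: referral targets 127.0.0.2 through 127.0.0.4 on port 4430, the well-known discovery NQN nqn.2014-08.org.nvmexpress.discovery, and nqn.2016-06.io.spdk:cnode1 as the test subsystem.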
00:08:16.609 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.609 02:34:19 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:16.610 02:34:19 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.610 02:34:19 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:23.188 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:23.188 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.188 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:23.189 Found net devices under 0000:18:00.0: mlx_0_0 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:23.189 Found net devices under 0000:18:00.1: mlx_0_1 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:23.189 02:34:25 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 
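The preceding block is test/nvmf/common.sh finding the two Mellanox ports (0000:18:00.0 and 0000:18:00.1, exposed as mlx_0_0 and mlx_0_1) and loading the RDMA stack. Stripped of the xtrace noise, the interesting part is roughly the sketch below; the interface names and the 192.168.100.0/24 addressing are specific to this node, and get_ip_address is quoted from the trace rather than from the script source.

  # load the kernel RDMA/IB modules, then read each port's IPv4 address
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # -> 192.168.100.8 on this host
  get_ip_address mlx_0_1    # -> 192.168.100.9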
00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:23.189 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.189 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:08:23.189 altname enp24s0f0np0 00:08:23.189 altname ens785f0np0 00:08:23.189 inet 192.168.100.8/24 scope global mlx_0_0 00:08:23.189 valid_lft forever preferred_lft forever 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:23.189 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.189 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:08:23.189 altname enp24s0f1np1 00:08:23.189 altname ens785f1np1 00:08:23.189 inet 192.168.100.9/24 scope global mlx_0_1 00:08:23.189 valid_lft forever preferred_lft forever 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:23.189 02:34:26 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:23.189 192.168.100.9' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:23.189 192.168.100.9' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:23.189 02:34:26 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:23.189 192.168.100.9' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:23.189 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=695480 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 695480 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 695480 ']' 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:23.190 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.190 [2024-05-15 02:34:26.271525] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:08:23.190 [2024-05-15 02:34:26.271598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.190 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.190 [2024-05-15 02:34:26.382695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.190 [2024-05-15 02:34:26.435783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.190 [2024-05-15 02:34:26.435833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.190 [2024-05-15 02:34:26.435848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.190 [2024-05-15 02:34:26.435867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
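With NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP resolved and nvme-rdma loaded, nvmfappstart launches the target (pid 695480 above) and waits for its RPC socket. A simplified stand-in for that step is shown below; the real waitforlisten helper in autotest_common.sh is more careful about timeouts and stale sockets, and rpc_get_methods is used here only as a cheap liveness probe.

  # start nvmf_tgt the way the harness does, then block until RPC answers
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done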
00:08:23.190 [2024-05-15 02:34:26.435878] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.190 [2024-05-15 02:34:26.435983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.190 [2024-05-15 02:34:26.436080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.190 [2024-05-15 02:34:26.436126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.190 [2024-05-15 02:34:26.436126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.449 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.449 [2024-05-15 02:34:26.640789] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1854d70/0x1859260) succeed. 00:08:23.449 [2024-05-15 02:34:26.655884] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18563b0/0x189a8f0) succeed. 
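Both IB devices are up, so the referrals test proper begins. The sequence the trace below walks through (referrals.sh@40-50, with the RDMA transport created just above) amounts to this sketch; $NVME_HOSTNQN and $NVME_HOSTID stand for the values produced by nvme gen-hostnqn earlier in the trace, and the jq filters are copied verbatim.

  # add a discovery listener plus three referrals, then verify them two ways
  scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length                      # expect 3
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # the same three addresses must appear in the discovery log served on port 8009
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort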
00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 [2024-05-15 02:34:26.811756] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:23.709 [2024-05-15 02:34:26.812167] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.709 02:34:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:23.968 02:34:27 
nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:23.968 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:23.969 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.969 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.969 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.227 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.228 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@74 -- # get_referral_ips nvme 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.487 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.746 02:34:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:24.746 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:24.746 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.747 02:34:28 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.005 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:25.006 rmmod nvme_rdma 00:08:25.006 rmmod nvme_fabrics 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 695480 ']' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 695480 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 695480 ']' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 695480 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:08:25.006 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 695480 00:08:25.264 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:08:25.264 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:08:25.264 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 695480' 00:08:25.264 killing process with pid 695480 00:08:25.264 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 695480 00:08:25.265 
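
The referrals trace above adds and removes entries with nvmf_discovery_add_referral / nvmf_discovery_remove_referral and then cross-checks the target's referral list against what a host actually sees in the discovery log. Below is a minimal bash sketch of that cross-check, not the test script itself: the rpc.py path, target address and discovery port are placeholders for whatever the local setup uses, and only the jq filters visible in the trace are assumed.

# Sketch: compare the referrals the SPDK target reports over RPC with what an
# NVMe host sees via the discovery service. RPC path, address and port are
# placeholders; the jq expressions mirror the ones used in the trace above.
RPC=./scripts/rpc.py          # assumed location of SPDK's rpc.py
TADDR=192.168.100.8           # discovery service address used in this run
DPORT=8009                    # discovery service port used in this run

rpc_ips=$("$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
nvme_ips=$(nvme discover -t rdma -a "$TADDR" -s "$DPORT" -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort)

if [[ "$rpc_ips" == "$nvme_ips" ]]; then
    echo "referral lists match: $rpc_ips"
else
    echo "referral mismatch: rpc=[$rpc_ips] nvme=[$nvme_ips]" >&2
fi
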
[2024-05-15 02:34:28.307243] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:25.265 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 695480 00:08:25.265 [2024-05-15 02:34:28.416383] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:25.524 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.524 02:34:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:25.524 00:08:25.524 real 0m8.967s 00:08:25.524 user 0m11.094s 00:08:25.524 sys 0m5.791s 00:08:25.524 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:25.524 02:34:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.524 ************************************ 00:08:25.524 END TEST nvmf_referrals 00:08:25.524 ************************************ 00:08:25.524 02:34:28 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:25.524 02:34:28 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:08:25.524 02:34:28 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:25.524 02:34:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:25.524 ************************************ 00:08:25.524 START TEST nvmf_connect_disconnect 00:08:25.524 ************************************ 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:25.524 * Looking for test storage... 
00:08:25.524 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.524 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.783 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 
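
For readability, here is a condensed sketch of the environment that test/nvmf/common.sh establishes for these RDMA runs, assembled from the values echoed in the trace above; it is illustrative only, not the script's actual contents, and the HOSTID derivation in particular is an assumption.

# Condensed view of the nvmf test environment (values taken from the trace;
# the HOSTID derivation is an assumption for illustration).
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_SERIAL=SPDKISFASTANDAWESOME
NET_TYPE=phy
NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed: uuid portion of the hostnqn
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'                 # later becomes 'nvme connect -i 15' for mlx5 ports
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
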
00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.784 02:34:28 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.356 02:34:35 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:32.356 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:32.356 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.356 02:34:35 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:32.356 Found net devices under 0000:18:00.0: mlx_0_0 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:32.356 Found net devices under 0000:18:00.1: mlx_0_1 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.356 02:34:35 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:32.356 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.356 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:08:32.356 altname enp24s0f0np0 00:08:32.356 altname ens785f0np0 00:08:32.356 inet 192.168.100.8/24 scope global mlx_0_0 00:08:32.356 valid_lft forever preferred_lft forever 00:08:32.356 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:32.357 
02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:32.357 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.357 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:08:32.357 altname enp24s0f1np1 00:08:32.357 altname ens785f1np1 00:08:32.357 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.357 valid_lft forever preferred_lft forever 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:32.357 192.168.100.9' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:32.357 192.168.100.9' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:32.357 192.168.100.9' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=698833 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 698833 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 698833 ']' 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:32.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:32.357 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.616 [2024-05-15 02:34:35.650510] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:08:32.616 [2024-05-15 02:34:35.650583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.616 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.616 [2024-05-15 02:34:35.758188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.616 [2024-05-15 02:34:35.809744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.616 [2024-05-15 02:34:35.809793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.616 [2024-05-15 02:34:35.809809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.616 [2024-05-15 02:34:35.809823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.616 [2024-05-15 02:34:35.809834] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.616 [2024-05-15 02:34:35.809913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.616 [2024-05-15 02:34:35.809966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.616 [2024-05-15 02:34:35.810069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.616 [2024-05-15 02:34:35.810069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:32.875 02:34:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 [2024-05-15 02:34:35.980592] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:32.875 [2024-05-15 02:34:36.009052] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9e8d70/0x9ed260) succeed. 00:08:32.875 [2024-05-15 02:34:36.023947] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9ea3b0/0xa2e8f0) succeed. 
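
Before the connect/disconnect loop starts, the trace above detects the Mellanox netdevs (mlx_0_0 / mlx_0_1), derives the RDMA target IPs from them, loads nvme-rdma, and creates the RDMA transport on the freshly started nvmf_tgt. A rough sketch of those steps follows; the interface names and rpc.py path are placeholders, and the commands simply mirror what the trace shows.

# Sketch: derive the first/second RDMA target IPs the way the trace does, then
# create the RDMA transport on a running nvmf_tgt. Interface names and the
# rpc.py path are assumptions for illustration.
RPC=./scripts/rpc.py

get_ip_address() {
    # first IPv4 address of the given interface, without the prefix length
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

modprobe nvme-rdma
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
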
00:08:32.875 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:32.875 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.875 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:32.875 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.134 [2024-05-15 02:34:36.195408] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:33.134 [2024-05-15 02:34:36.195826] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:33.134 02:34:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:36.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:01.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.960 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.500 02:39:50 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:47.500 rmmod nvme_rdma 00:13:47.500 rmmod nvme_fabrics 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 698833 ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 698833 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 698833 ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 698833 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 698833 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:47.500 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 698833' 00:13:47.500 killing process with pid 698833 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 698833 00:13:47.501 [2024-05-15 02:39:50.283071] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 698833 00:13:47.501 [2024-05-15 02:39:50.350753] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:47.501 00:13:47.501 real 5m21.890s 00:13:47.501 user 20m54.783s 00:13:47.501 sys 0m17.261s 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:47.501 02:39:50 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:47.501 ************************************ 00:13:47.501 END TEST nvmf_connect_disconnect 00:13:47.501 ************************************ 00:13:47.501 02:39:50 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:47.501 02:39:50 nvmf_rdma -- 
common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:47.501 02:39:50 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:47.501 02:39:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:47.501 ************************************ 00:13:47.501 START TEST nvmf_multitarget 00:13:47.501 ************************************ 00:13:47.501 02:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:47.501 * Looking for test storage... 00:13:47.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:47.501 02:39:50 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.501 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.764 02:39:50 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:47.765 
02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.765 02:39:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.335 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.336 
02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:54.336 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:54.336 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.336 02:39:56 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:54.336 Found net devices under 0000:18:00.0: mlx_0_0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:54.336 Found net devices under 0000:18:00.1: mlx_0_1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.336 02:39:56 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:54.336 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.336 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:13:54.336 altname enp24s0f0np0 00:13:54.336 altname ens785f0np0 00:13:54.336 inet 192.168.100.8/24 scope global mlx_0_0 00:13:54.336 valid_lft forever preferred_lft forever 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:54.336 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.336 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:13:54.336 altname enp24s0f1np1 00:13:54.336 altname ens785f1np1 00:13:54.336 inet 192.168.100.9/24 scope global mlx_0_1 00:13:54.336 valid_lft forever preferred_lft forever 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@422 -- # return 0 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:54.336 02:39:56 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.336 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 
-- # RDMA_IP_LIST='192.168.100.8 00:13:54.337 192.168.100.9' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:54.337 192.168.100.9' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:54.337 192.168.100.9' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=745363 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 745363 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 745363 ']' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 [2024-05-15 02:39:57.163394] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:13:54.337 [2024-05-15 02:39:57.163459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.337 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.337 [2024-05-15 02:39:57.270055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.337 [2024-05-15 02:39:57.319828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
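The nvmf/common.sh@456-458 commands in the trace above show how the harness turns the discovered RDMA addresses into the first and second target IPs: it collects one address per interface into RDMA_IP_LIST and then slices that newline-separated list with head and tail. A condensed sketch of the same pipeline, with the addresses taken from this run (the standalone variable layout here is a simplification, not a copy of common.sh):

    # RDMA_IP_LIST holds one IPv4 address per RDMA-capable interface.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'

    # First entry becomes the primary target IP, the next entry the secondary one.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [ -z "$NVMF_FIRST_TARGET_IP" ] && echo 'no RDMA IPs found' >&2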
00:13:54.337 [2024-05-15 02:39:57.319876] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.337 [2024-05-15 02:39:57.319890] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.337 [2024-05-15 02:39:57.319907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.337 [2024-05-15 02:39:57.319919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.337 [2024-05-15 02:39:57.319983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.337 [2024-05-15 02:39:57.320004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.337 [2024-05-15 02:39:57.320116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.337 [2024-05-15 02:39:57.320116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:54.337 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:54.596 "nvmf_tgt_1" 00:13:54.596 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:54.596 "nvmf_tgt_2" 00:13:54.596 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:54.596 02:39:57 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:54.855 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:54.855 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:54.855 true 00:13:54.855 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:55.114 true 00:13:55.114 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:55.114 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:55.373 rmmod nvme_rdma 00:13:55.373 rmmod nvme_fabrics 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 745363 ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 745363 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 745363 ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 745363 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 745363 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 745363' 00:13:55.373 killing process with pid 745363 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 745363 00:13:55.373 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 745363 00:13:55.632 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.632 02:39:58 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:55.632 00:13:55.632 real 0m8.000s 00:13:55.632 user 0m8.004s 00:13:55.632 sys 0m5.374s 00:13:55.632 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:55.632 02:39:58 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.632 ************************************ 00:13:55.632 END TEST nvmf_multitarget 00:13:55.632 ************************************ 00:13:55.632 02:39:58 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 
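The nvmf_multitarget trace above boils down to a small create/count/delete cycle driven by multitarget_rpc.py and jq. A condensed sketch of that flow, assuming the target application is already running (the jcount helper name is shorthand introduced here; the real multitarget.sh structures these checks slightly differently):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # Count the targets currently known to the nvmf subsystem.
    jcount() { "$rpc_py" nvmf_get_targets | jq length; }

    [ "$(jcount)" -eq 1 ]                              # only the default target exists
    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(jcount)" -eq 3 ]                              # default target plus the two new ones
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    [ "$(jcount)" -eq 1 ]                              # back to the default target only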
00:13:55.632 02:39:58 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:55.632 02:39:58 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:55.632 02:39:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:55.632 ************************************ 00:13:55.632 START TEST nvmf_rpc 00:13:55.632 ************************************ 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:13:55.632 * Looking for test storage... 00:13:55.632 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.632 02:39:58 
nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.632 02:39:58 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.633 02:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- 
# for pci in "${pci_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:02.201 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:02.201 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:02.201 Found net devices under 0000:18:00.0: mlx_0_0 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:02.201 Found net devices under 0000:18:00.1: mlx_0_1 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
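The gather_supported_nvmf_pci_devs trace above resolves each Mellanox PCI function to its kernel net device by globbing sysfs. A standalone sketch of that lookup, with the PCI address taken from this log:

    # List the net devices that sit behind a given PCI function.
    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep only the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints mlx_0_0 on this node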
00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:02.201 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:02.202 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:02.202 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:14:02.202 altname enp24s0f0np0 00:14:02.202 altname ens785f0np0 00:14:02.202 inet 192.168.100.8/24 scope global mlx_0_0 00:14:02.202 valid_lft forever preferred_lft forever 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:02.202 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:02.202 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:14:02.202 altname enp24s0f1np1 00:14:02.202 altname ens785f1np1 00:14:02.202 inet 192.168.100.9/24 scope global mlx_0_1 00:14:02.202 valid_lft forever preferred_lft forever 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:02.202 192.168.100.9' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:02.202 192.168.100.9' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:02.202 192.168.100.9' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=748446 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 748446 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 748446 ']' 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:02.202 02:40:04 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.202 02:40:04 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.202 [2024-05-15 02:40:04.929764] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:14:02.202 [2024-05-15 02:40:04.929832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.202 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.202 [2024-05-15 02:40:05.037796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.202 [2024-05-15 02:40:05.089911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.202 [2024-05-15 02:40:05.089961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.202 [2024-05-15 02:40:05.089975] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.202 [2024-05-15 02:40:05.089988] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.202 [2024-05-15 02:40:05.089999] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
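For orientation, what nvmf/common.sh has just done above can be condensed into a few commands: resolve the IPv4 address of each renamed mlx_0_* port with the same ip/awk/cut pipeline shown in the trace, load the initiator-side nvme-rdma module, and start nvmf_tgt on four cores. This is a minimal sketch, not captured output; the relative ./build/bin path stands in for the full Jenkins workspace path used in the run.

    # Resolve the RDMA port IPs exactly as get_ip_address does in the trace.
    first_ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8 in this run
    second_ip=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9 in this run

    # Kernel module needed for the nvme connect calls made later in the test.
    modprobe nvme-rdma

    # Start the target on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &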
00:14:02.202 [2024-05-15 02:40:05.090059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.202 [2024-05-15 02:40:05.090141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.202 [2024-05-15 02:40:05.090242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.202 [2024-05-15 02:40:05.090242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:02.202 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:02.203 "tick_rate": 2300000000, 00:14:02.203 "poll_groups": [ 00:14:02.203 { 00:14:02.203 "name": "nvmf_tgt_poll_group_000", 00:14:02.203 "admin_qpairs": 0, 00:14:02.203 "io_qpairs": 0, 00:14:02.203 "current_admin_qpairs": 0, 00:14:02.203 "current_io_qpairs": 0, 00:14:02.203 "pending_bdev_io": 0, 00:14:02.203 "completed_nvme_io": 0, 00:14:02.203 "transports": [] 00:14:02.203 }, 00:14:02.203 { 00:14:02.203 "name": "nvmf_tgt_poll_group_001", 00:14:02.203 "admin_qpairs": 0, 00:14:02.203 "io_qpairs": 0, 00:14:02.203 "current_admin_qpairs": 0, 00:14:02.203 "current_io_qpairs": 0, 00:14:02.203 "pending_bdev_io": 0, 00:14:02.203 "completed_nvme_io": 0, 00:14:02.203 "transports": [] 00:14:02.203 }, 00:14:02.203 { 00:14:02.203 "name": "nvmf_tgt_poll_group_002", 00:14:02.203 "admin_qpairs": 0, 00:14:02.203 "io_qpairs": 0, 00:14:02.203 "current_admin_qpairs": 0, 00:14:02.203 "current_io_qpairs": 0, 00:14:02.203 "pending_bdev_io": 0, 00:14:02.203 "completed_nvme_io": 0, 00:14:02.203 "transports": [] 00:14:02.203 }, 00:14:02.203 { 00:14:02.203 "name": "nvmf_tgt_poll_group_003", 00:14:02.203 "admin_qpairs": 0, 00:14:02.203 "io_qpairs": 0, 00:14:02.203 "current_admin_qpairs": 0, 00:14:02.203 "current_io_qpairs": 0, 00:14:02.203 "pending_bdev_io": 0, 00:14:02.203 "completed_nvme_io": 0, 00:14:02.203 "transports": [] 00:14:02.203 } 00:14:02.203 ] 00:14:02.203 }' 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.203 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 [2024-05-15 02:40:05.406452] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca4de0/0x1ca92d0) succeed. 00:14:02.203 [2024-05-15 02:40:05.421594] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca6420/0x1cea960) succeed. 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.463 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:02.463 "tick_rate": 2300000000, 00:14:02.463 "poll_groups": [ 00:14:02.463 { 00:14:02.463 "name": "nvmf_tgt_poll_group_000", 00:14:02.463 "admin_qpairs": 0, 00:14:02.463 "io_qpairs": 0, 00:14:02.463 "current_admin_qpairs": 0, 00:14:02.463 "current_io_qpairs": 0, 00:14:02.463 "pending_bdev_io": 0, 00:14:02.463 "completed_nvme_io": 0, 00:14:02.463 "transports": [ 00:14:02.463 { 00:14:02.463 "trtype": "RDMA", 00:14:02.463 "pending_data_buffer": 0, 00:14:02.463 "devices": [ 00:14:02.463 { 00:14:02.463 "name": "mlx5_0", 00:14:02.463 "polls": 15296, 00:14:02.463 "idle_polls": 15296, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "mlx5_1", 00:14:02.463 "polls": 15296, 00:14:02.463 "idle_polls": 15296, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "nvmf_tgt_poll_group_001", 00:14:02.463 "admin_qpairs": 0, 00:14:02.463 "io_qpairs": 0, 00:14:02.463 "current_admin_qpairs": 0, 00:14:02.463 "current_io_qpairs": 0, 00:14:02.463 "pending_bdev_io": 0, 00:14:02.463 "completed_nvme_io": 0, 00:14:02.463 "transports": [ 00:14:02.463 { 00:14:02.463 "trtype": "RDMA", 00:14:02.463 "pending_data_buffer": 0, 00:14:02.463 "devices": [ 00:14:02.463 { 00:14:02.463 "name": "mlx5_0", 00:14:02.463 "polls": 12656, 00:14:02.463 "idle_polls": 12656, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 }, 
00:14:02.463 { 00:14:02.463 "name": "mlx5_1", 00:14:02.463 "polls": 12656, 00:14:02.463 "idle_polls": 12656, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "nvmf_tgt_poll_group_002", 00:14:02.463 "admin_qpairs": 0, 00:14:02.463 "io_qpairs": 0, 00:14:02.463 "current_admin_qpairs": 0, 00:14:02.463 "current_io_qpairs": 0, 00:14:02.463 "pending_bdev_io": 0, 00:14:02.463 "completed_nvme_io": 0, 00:14:02.463 "transports": [ 00:14:02.463 { 00:14:02.463 "trtype": "RDMA", 00:14:02.463 "pending_data_buffer": 0, 00:14:02.463 "devices": [ 00:14:02.463 { 00:14:02.463 "name": "mlx5_0", 00:14:02.463 "polls": 5765, 00:14:02.463 "idle_polls": 5765, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "mlx5_1", 00:14:02.463 "polls": 5765, 00:14:02.463 "idle_polls": 5765, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "nvmf_tgt_poll_group_003", 00:14:02.463 "admin_qpairs": 0, 00:14:02.463 "io_qpairs": 0, 00:14:02.463 "current_admin_qpairs": 0, 00:14:02.463 "current_io_qpairs": 0, 00:14:02.463 "pending_bdev_io": 0, 00:14:02.463 "completed_nvme_io": 0, 00:14:02.463 "transports": [ 00:14:02.463 { 00:14:02.463 "trtype": "RDMA", 00:14:02.463 "pending_data_buffer": 0, 00:14:02.463 "devices": [ 00:14:02.463 { 00:14:02.463 "name": "mlx5_0", 00:14:02.463 "polls": 721, 00:14:02.463 "idle_polls": 721, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 }, 00:14:02.463 { 00:14:02.463 "name": "mlx5_1", 00:14:02.463 "polls": 721, 00:14:02.463 "idle_polls": 721, 00:14:02.463 "completions": 0, 00:14:02.463 "requests": 0, 00:14:02.463 "request_latency": 0, 00:14:02.463 "pending_free_request": 0, 00:14:02.463 "pending_rdma_read": 0, 00:14:02.463 "pending_rdma_write": 0, 00:14:02.463 "pending_rdma_send": 0, 00:14:02.463 "total_send_wrs": 0, 00:14:02.463 "send_doorbell_updates": 0, 00:14:02.463 "total_recv_wrs": 4096, 00:14:02.463 "recv_doorbell_updates": 1 00:14:02.463 } 
00:14:02.463 ] 00:14:02.463 } 00:14:02.463 ] 00:14:02.463 } 00:14:02.464 ] 00:14:02.464 }' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:02.464 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 Malloc1 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.724 
02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 [2024-05-15 02:40:05.900180] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:02.724 [2024-05-15 02:40:05.900581] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -a 192.168.100.8 -s 4420 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -a 192.168.100.8 -s 4420 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -a 192.168.100.8 -s 4420 00:14:02.724 [2024-05-15 02:40:05.946180] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e' 00:14:02.724 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:02.724 could not add new controller: failed to write to nvme-fabrics device 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:02.724 02:40:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:04.101 02:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.101 02:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:04.101 02:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.101 02:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:04.101 02:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:06.042 02:40:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.981 02:40:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:06.981 [2024-05-15 02:40:10.028412] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e' 00:14:06.981 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:06.981 could not add new controller: failed to write to nvme-fabrics device 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:06.981 02:40:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:07.916 02:40:11 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.917 02:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:07.917 02:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.917 02:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:07.917 02:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:09.822 02:40:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.756 02:40:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.756 02:40:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.756 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.014 [2024-05-15 02:40:14.073137] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.014 02:40:14 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.014 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.015 02:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:11.950 02:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.950 02:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:11.950 02:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.950 02:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:11.950 02:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:13.852 02:40:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.791 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 [2024-05-15 02:40:18.107661] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.089 02:40:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:16.022 02:40:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.022 02:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:16.022 02:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.022 02:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:16.022 02:40:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # return 0 00:14:17.924 02:40:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.862 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.863 [2024-05-15 02:40:22.137236] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.863 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
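Each pass of the loop above and below runs the same attach/detach body from target/rpc.sh@81-94. A minimal standalone sketch of one pass follows, calling scripts/rpc.py directly instead of the rpc_cmd wrapper; the rpc.py path is an assumption about the checkout layout, and the --hostnqn/--hostid arguments passed to nvme connect in the trace are left out for brevity.

    NQN=nqn.2016-06.io.spdk:cnode1
    IP=192.168.100.8

    # Build the subsystem: serial number, RDMA listener, Malloc1 as namespace 5, open to any host.
    ./scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t rdma -a "$IP" -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"

    # Attach from the initiator side and wait for the namespace to show up (waitforserial).
    nvme connect -t rdma -n "$NQN" -a "$IP" -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    nvme disconnect -n "$NQN"

    # Tear down: drop the namespace, then the subsystem.
    ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
    ./scripts/rpc.py nvmf_delete_subsystem "$NQN"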
00:14:19.121 02:40:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.121 02:40:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:20.057 02:40:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.057 02:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:20.057 02:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.057 02:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:20.057 02:40:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:21.964 02:40:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.903 [2024-05-15 02:40:26.169643] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.903 02:40:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:24.279 02:40:27 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:24.279 02:40:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:24.279 02:40:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.279 02:40:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:24.279 02:40:27 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:26.182 02:40:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 [2024-05-15 02:40:30.196142] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:27.119 02:40:30 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:28.054 02:40:31 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:28.054 02:40:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:28.054 02:40:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.054 
02:40:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:28.054 02:40:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:29.955 02:40:33 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.890 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.890 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:30.890 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:30.890 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 [2024-05-15 02:40:34.235987] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 
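From target/rpc.sh@99 onward the loop that begins here drives the same subsystem RPCs without ever connecting a host; note that nvmf_subsystem_add_ns is now called without -n, so the target assigns the NSID itself, which is why the matching removals below use NSID 1. A short sketch of that middle portion, plus the kind of jq reduction the jsum helper applied to nvmf_get_stats earlier (again assuming the usual scripts/rpc.py entry point):

    NQN=nqn.2016-06.io.spdk:cnode1
    # Subsystem and listener created as in the previous sketch; no host connects in this phase.
    ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1   # no -n: lowest free NSID is assigned (1 here)
    ./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
    ./scripts/rpc.py nvmf_delete_subsystem "$NQN"

    # With nothing connected, io_qpairs summed across the poll groups should stay 0,
    # the same check jsum '.poll_groups[].io_qpairs' performed earlier in this run.
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'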
00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 [2024-05-15 02:40:34.285192] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 [2024-05-15 02:40:34.337820] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 [2024-05-15 02:40:34.386371] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.150 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 [2024-05-15 
02:40:34.434937] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.408 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:31.408 "tick_rate": 2300000000, 00:14:31.408 "poll_groups": [ 00:14:31.408 { 00:14:31.408 "name": "nvmf_tgt_poll_group_000", 00:14:31.408 "admin_qpairs": 2, 00:14:31.408 "io_qpairs": 27, 00:14:31.408 "current_admin_qpairs": 0, 00:14:31.408 "current_io_qpairs": 0, 00:14:31.408 "pending_bdev_io": 0, 00:14:31.408 "completed_nvme_io": 215, 00:14:31.408 "transports": [ 00:14:31.408 { 00:14:31.408 "trtype": "RDMA", 00:14:31.408 "pending_data_buffer": 0, 00:14:31.408 "devices": [ 00:14:31.408 { 00:14:31.408 "name": "mlx5_0", 00:14:31.408 "polls": 2819542, 00:14:31.408 "idle_polls": 2819083, 00:14:31.408 "completions": 545, 00:14:31.408 "requests": 272, 00:14:31.408 "request_latency": 80041930, 00:14:31.408 "pending_free_request": 0, 00:14:31.408 "pending_rdma_read": 0, 00:14:31.408 "pending_rdma_write": 0, 00:14:31.408 "pending_rdma_send": 0, 00:14:31.408 "total_send_wrs": 487, 00:14:31.408 "send_doorbell_updates": 226, 00:14:31.408 "total_recv_wrs": 4368, 00:14:31.408 "recv_doorbell_updates": 226 00:14:31.408 }, 00:14:31.408 { 00:14:31.409 "name": "mlx5_1", 00:14:31.409 "polls": 2819542, 00:14:31.409 "idle_polls": 2819542, 00:14:31.409 "completions": 0, 00:14:31.409 "requests": 0, 00:14:31.409 "request_latency": 0, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 
"pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 0, 00:14:31.409 "send_doorbell_updates": 0, 00:14:31.409 "total_recv_wrs": 4096, 00:14:31.409 "recv_doorbell_updates": 1 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "nvmf_tgt_poll_group_001", 00:14:31.409 "admin_qpairs": 2, 00:14:31.409 "io_qpairs": 26, 00:14:31.409 "current_admin_qpairs": 0, 00:14:31.409 "current_io_qpairs": 0, 00:14:31.409 "pending_bdev_io": 0, 00:14:31.409 "completed_nvme_io": 82, 00:14:31.409 "transports": [ 00:14:31.409 { 00:14:31.409 "trtype": "RDMA", 00:14:31.409 "pending_data_buffer": 0, 00:14:31.409 "devices": [ 00:14:31.409 { 00:14:31.409 "name": "mlx5_0", 00:14:31.409 "polls": 3290909, 00:14:31.409 "idle_polls": 3290659, 00:14:31.409 "completions": 272, 00:14:31.409 "requests": 136, 00:14:31.409 "request_latency": 23387018, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 217, 00:14:31.409 "send_doorbell_updates": 124, 00:14:31.409 "total_recv_wrs": 4232, 00:14:31.409 "recv_doorbell_updates": 125 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "mlx5_1", 00:14:31.409 "polls": 3290909, 00:14:31.409 "idle_polls": 3290909, 00:14:31.409 "completions": 0, 00:14:31.409 "requests": 0, 00:14:31.409 "request_latency": 0, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 0, 00:14:31.409 "send_doorbell_updates": 0, 00:14:31.409 "total_recv_wrs": 4096, 00:14:31.409 "recv_doorbell_updates": 1 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "nvmf_tgt_poll_group_002", 00:14:31.409 "admin_qpairs": 1, 00:14:31.409 "io_qpairs": 26, 00:14:31.409 "current_admin_qpairs": 0, 00:14:31.409 "current_io_qpairs": 0, 00:14:31.409 "pending_bdev_io": 0, 00:14:31.409 "completed_nvme_io": 32, 00:14:31.409 "transports": [ 00:14:31.409 { 00:14:31.409 "trtype": "RDMA", 00:14:31.409 "pending_data_buffer": 0, 00:14:31.409 "devices": [ 00:14:31.409 { 00:14:31.409 "name": "mlx5_0", 00:14:31.409 "polls": 2912100, 00:14:31.409 "idle_polls": 2911977, 00:14:31.409 "completions": 123, 00:14:31.409 "requests": 61, 00:14:31.409 "request_latency": 8626810, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 81, 00:14:31.409 "send_doorbell_updates": 62, 00:14:31.409 "total_recv_wrs": 4157, 00:14:31.409 "recv_doorbell_updates": 62 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "mlx5_1", 00:14:31.409 "polls": 2912100, 00:14:31.409 "idle_polls": 2912100, 00:14:31.409 "completions": 0, 00:14:31.409 "requests": 0, 00:14:31.409 "request_latency": 0, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 0, 00:14:31.409 "send_doorbell_updates": 0, 00:14:31.409 "total_recv_wrs": 4096, 00:14:31.409 "recv_doorbell_updates": 1 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "nvmf_tgt_poll_group_003", 00:14:31.409 "admin_qpairs": 2, 00:14:31.409 "io_qpairs": 26, 00:14:31.409 
"current_admin_qpairs": 0, 00:14:31.409 "current_io_qpairs": 0, 00:14:31.409 "pending_bdev_io": 0, 00:14:31.409 "completed_nvme_io": 126, 00:14:31.409 "transports": [ 00:14:31.409 { 00:14:31.409 "trtype": "RDMA", 00:14:31.409 "pending_data_buffer": 0, 00:14:31.409 "devices": [ 00:14:31.409 { 00:14:31.409 "name": "mlx5_0", 00:14:31.409 "polls": 2194634, 00:14:31.409 "idle_polls": 2194330, 00:14:31.409 "completions": 364, 00:14:31.409 "requests": 182, 00:14:31.409 "request_latency": 51589784, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 308, 00:14:31.409 "send_doorbell_updates": 157, 00:14:31.409 "total_recv_wrs": 4278, 00:14:31.409 "recv_doorbell_updates": 158 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "mlx5_1", 00:14:31.409 "polls": 2194634, 00:14:31.409 "idle_polls": 2194634, 00:14:31.409 "completions": 0, 00:14:31.409 "requests": 0, 00:14:31.409 "request_latency": 0, 00:14:31.409 "pending_free_request": 0, 00:14:31.409 "pending_rdma_read": 0, 00:14:31.409 "pending_rdma_write": 0, 00:14:31.409 "pending_rdma_send": 0, 00:14:31.409 "total_send_wrs": 0, 00:14:31.409 "send_doorbell_updates": 0, 00:14:31.409 "total_recv_wrs": 4096, 00:14:31.409 "recv_doorbell_updates": 1 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 }' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1304 > 0 )) 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:14:31.409 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc 
-- target/rpc.sh@118 -- # (( 163645542 > 0 )) 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:31.668 rmmod nvme_rdma 00:14:31.668 rmmod nvme_fabrics 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 748446 ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 748446 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 748446 ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 748446 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 748446 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 748446' 00:14:31.668 killing process with pid 748446 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 748446 00:14:31.668 [2024-05-15 02:40:34.809953] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:31.668 02:40:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 748446 00:14:31.668 [2024-05-15 02:40:34.919472] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:31.926 02:40:35 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.926 02:40:35 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:31.926 00:14:31.926 real 0m36.388s 00:14:31.926 user 2m1.464s 00:14:31.926 sys 0m6.303s 00:14:31.926 02:40:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:31.927 02:40:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.927 ************************************ 00:14:31.927 END TEST nvmf_rpc 00:14:31.927 ************************************ 00:14:31.927 02:40:35 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:31.927 02:40:35 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:31.927 02:40:35 
nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:31.927 02:40:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:32.184 ************************************ 00:14:32.184 START TEST nvmf_invalid 00:14:32.184 ************************************ 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:14:32.184 * Looking for test storage... 00:14:32.184 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.184 02:40:35 
nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.184 02:40:35 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.185 02:40:35 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:38.748 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:38.749 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:38.749 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:38.749 Found net devices under 0000:18:00.0: mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:38.749 Found net devices under 0000:18:00.1: mlx_0_1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:38.749 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.749 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:14:38.749 altname enp24s0f0np0 00:14:38.749 altname ens785f0np0 00:14:38.749 inet 192.168.100.8/24 scope global mlx_0_0 00:14:38.749 valid_lft forever preferred_lft forever 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:38.749 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.749 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:14:38.749 altname enp24s0f1np1 00:14:38.749 altname ens785f1np1 00:14:38.749 inet 192.168.100.9/24 scope global mlx_0_1 00:14:38.749 valid_lft forever preferred_lft forever 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # 
mapfile -t rxe_net_devs 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.749 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:38.750 192.168.100.9' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:38.750 192.168.100.9' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:38.750 192.168.100.9' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=755383 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 755383 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 755383 ']' 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:38.750 02:40:41 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.750 [2024-05-15 02:40:41.826977] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:14:38.750 [2024-05-15 02:40:41.827048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.750 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.750 [2024-05-15 02:40:41.937432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.750 [2024-05-15 02:40:41.988100] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.750 [2024-05-15 02:40:41.988151] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.750 [2024-05-15 02:40:41.988166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.750 [2024-05-15 02:40:41.988179] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.750 [2024-05-15 02:40:41.988190] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
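Before starting the nvmf_invalid target above, nvmf/common.sh walked the mlx5 netdevs (mlx_0_0, mlx_0_1) and derived the RDMA target addresses 192.168.100.8 and 192.168.100.9 by parsing 'ip -o -4 addr show'. A minimal sketch of that address-lookup step is shown below; the interface names are the ones reported in this run and will differ on other hosts.

    # Sketch of the get_ip_address step used above (nvmf/common.sh@112-113).
    get_ip_address() {
        local interface=$1
        # field 4 of 'ip -o -4 addr show <if>' is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run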
00:14:38.750 [2024-05-15 02:40:41.988261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.750 [2024-05-15 02:40:41.988346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.750 [2024-05-15 02:40:41.988447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.750 [2024-05-15 02:40:41.988446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:39.008 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14824 00:14:39.267 [2024-05-15 02:40:42.384973] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:39.267 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:39.267 { 00:14:39.267 "nqn": "nqn.2016-06.io.spdk:cnode14824", 00:14:39.267 "tgt_name": "foobar", 00:14:39.267 "method": "nvmf_create_subsystem", 00:14:39.267 "req_id": 1 00:14:39.267 } 00:14:39.267 Got JSON-RPC error response 00:14:39.267 response: 00:14:39.267 { 00:14:39.267 "code": -32603, 00:14:39.267 "message": "Unable to find target foobar" 00:14:39.267 }' 00:14:39.267 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:39.267 { 00:14:39.267 "nqn": "nqn.2016-06.io.spdk:cnode14824", 00:14:39.267 "tgt_name": "foobar", 00:14:39.267 "method": "nvmf_create_subsystem", 00:14:39.267 "req_id": 1 00:14:39.267 } 00:14:39.267 Got JSON-RPC error response 00:14:39.267 response: 00:14:39.267 { 00:14:39.267 "code": -32603, 00:14:39.267 "message": "Unable to find target foobar" 00:14:39.267 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:39.267 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:39.267 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3063 00:14:39.525 [2024-05-15 02:40:42.653961] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3063: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:39.525 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:39.525 { 00:14:39.525 "nqn": "nqn.2016-06.io.spdk:cnode3063", 00:14:39.525 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.525 "method": "nvmf_create_subsystem", 00:14:39.525 "req_id": 1 00:14:39.525 } 00:14:39.525 Got JSON-RPC error response 00:14:39.525 response: 00:14:39.525 { 00:14:39.526 "code": -32602, 00:14:39.526 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.526 }' 00:14:39.526 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:14:39.526 { 00:14:39.526 "nqn": "nqn.2016-06.io.spdk:cnode3063", 00:14:39.526 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:39.526 "method": "nvmf_create_subsystem", 00:14:39.526 "req_id": 1 00:14:39.526 } 00:14:39.526 Got JSON-RPC error response 00:14:39.526 response: 00:14:39.526 { 00:14:39.526 "code": -32602, 00:14:39.526 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:39.526 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:39.526 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:39.526 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30369 00:14:39.784 [2024-05-15 02:40:42.914820] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30369: invalid model number 'SPDK_Controller' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:39.784 { 00:14:39.784 "nqn": "nqn.2016-06.io.spdk:cnode30369", 00:14:39.784 "model_number": "SPDK_Controller\u001f", 00:14:39.784 "method": "nvmf_create_subsystem", 00:14:39.784 "req_id": 1 00:14:39.784 } 00:14:39.784 Got JSON-RPC error response 00:14:39.784 response: 00:14:39.784 { 00:14:39.784 "code": -32602, 00:14:39.784 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.784 }' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:39.784 { 00:14:39.784 "nqn": "nqn.2016-06.io.spdk:cnode30369", 00:14:39.784 "model_number": "SPDK_Controller\u001f", 00:14:39.784 "method": "nvmf_create_subsystem", 00:14:39.784 "req_id": 1 00:14:39.784 } 00:14:39.784 Got JSON-RPC error response 00:14:39.784 response: 00:14:39.784 { 00:14:39.784 "code": -32602, 00:14:39.784 "message": "Invalid MN SPDK_Controller\u001f" 00:14:39.784 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
74 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:39.784 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4a' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:39.785 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:40.046 
02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '5J:[A4\Y]Jy/@/aK$L8Jz' 00:14:40.046 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '5J:[A4\Y]Jy/@/aK$L8Jz' nqn.2016-06.io.spdk:cnode19134 00:14:40.352 [2024-05-15 02:40:43.336382] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19134: invalid serial number '5J:[A4\Y]Jy/@/aK$L8Jz' 00:14:40.352 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:40.352 { 00:14:40.352 "nqn": "nqn.2016-06.io.spdk:cnode19134", 00:14:40.352 "serial_number": "5J:[A4\\Y]Jy/@/aK$L8Jz", 00:14:40.352 "method": "nvmf_create_subsystem", 00:14:40.352 "req_id": 1 00:14:40.352 } 00:14:40.352 Got JSON-RPC error response 00:14:40.352 response: 00:14:40.352 { 00:14:40.352 "code": -32602, 00:14:40.352 "message": "Invalid SN 5J:[A4\\Y]Jy/@/aK$L8Jz" 00:14:40.352 }' 00:14:40.352 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:40.352 { 00:14:40.352 "nqn": "nqn.2016-06.io.spdk:cnode19134", 00:14:40.352 "serial_number": "5J:[A4\\Y]Jy/@/aK$L8Jz", 00:14:40.352 "method": "nvmf_create_subsystem", 00:14:40.352 "req_id": 1 00:14:40.352 } 00:14:40.352 Got JSON-RPC error response 00:14:40.352 response: 00:14:40.352 { 00:14:40.352 "code": -32602, 00:14:40.353 "message": "Invalid SN 5J:[A4\\Y]Jy/@/aK$L8Jz" 00:14:40.353 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' 
'61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.353 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:40.354 02:40:43 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:40.354 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=r 00:14:40.355 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.356 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:40.357 02:40:43 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:40.357 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.358 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.359 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm6e*LB$S]oe<7Hrx(1)Por_wE\6%jhf}:lcZ#KU7s' 00:14:40.624 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm6e*LB$S]oe<7Hrx(1)Por_wE\6%jhf}:lcZ#KU7s' nqn.2016-06.io.spdk:cnode17269 00:14:40.883 [2024-05-15 02:40:43.926369] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17269: invalid model number 'm6e*LB$S]oe<7Hrx(1)Por_wE\6%jhf}:lcZ#KU7s' 00:14:40.883 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:40.883 { 00:14:40.883 "nqn": "nqn.2016-06.io.spdk:cnode17269", 00:14:40.883 "model_number": "m6e*LB$S]oe<7Hrx(1)Por_wE\\6%jhf}:lcZ#KU7s", 00:14:40.883 "method": "nvmf_create_subsystem", 00:14:40.883 "req_id": 1 00:14:40.883 } 00:14:40.883 Got JSON-RPC error response 00:14:40.883 response: 00:14:40.883 { 00:14:40.883 "code": -32602, 00:14:40.883 "message": "Invalid MN m6e*LB$S]oe<7Hrx(1)Por_wE\\6%jhf}:lcZ#KU7s" 00:14:40.883 }' 00:14:40.883 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:40.883 { 00:14:40.883 "nqn": "nqn.2016-06.io.spdk:cnode17269", 00:14:40.883 "model_number": "m6e*LB$S]oe<7Hrx(1)Por_wE\\6%jhf}:lcZ#KU7s", 00:14:40.883 "method": "nvmf_create_subsystem", 00:14:40.883 "req_id": 1 00:14:40.883 } 00:14:40.883 Got JSON-RPC error response 00:14:40.883 response: 00:14:40.883 { 00:14:40.883 "code": -32602, 00:14:40.883 "message": "Invalid MN m6e*LB$S]oe<7Hrx(1)Por_wE\\6%jhf}:lcZ#KU7s" 00:14:40.883 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:40.883 02:40:43 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:14:40.883 [2024-05-15 02:40:44.144098] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcf66d0/0xcfabc0) succeed. 00:14:40.883 [2024-05-15 02:40:44.158847] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcf7d10/0xd3c250) succeed. 
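The xtrace above is target/invalid.sh's gen_random_s helper building first a 21-character and then a 41-character string one printable ASCII code point (32..127) at a time, handing the result to nvmf_create_subsystem as a serial or model number, and asserting that the target answers with a -32602 "Invalid SN" / "Invalid MN" error. A minimal, self-contained sketch of that pattern follows; the cnode number and the rootdir default below are illustrative, not taken from the script.

    #!/usr/bin/env bash
    # condensed stand-in for the character-by-character loop traced above
    rootdir=${rootdir:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

    gen_random_s() {
        local length=$1 ll code string=''
        for ((ll = 0; ll < length; ll++)); do
            code=$((RANDOM % 96 + 32))                  # printable ASCII 32..127
            string+=$(printf "\\x$(printf %x "$code")") # append that character
        done
        echo "$string"
    }

    sn=$(gen_random_s 21)
    # ask the target to create a subsystem with the bogus serial number and
    # expect the JSON-RPC layer to reject it with an "Invalid SN" message
    out=$("$rootdir/scripts/rpc.py" nvmf_create_subsystem -s "$sn" \
        nqn.2016-06.io.spdk:cnode9999 2>&1) || true
    [[ $out == *"Invalid SN"* ]]

The escaped globs on the comparison lines above (*\I\n\v\a\l\i\d\ \S\N* and *\I\n\v\a\l\i\d\ \M\N*) are just the xtrace rendering of the same *"Invalid SN"* / *"Invalid MN"* pattern match.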
00:14:41.142 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:41.401 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:14:41.401 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:14:41.401 192.168.100.9' 00:14:41.401 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:41.401 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:14:41.401 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:14:41.660 [2024-05-15 02:40:44.750639] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.660 [2024-05-15 02:40:44.750749] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:41.660 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:41.660 { 00:14:41.660 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:41.660 "listen_address": { 00:14:41.660 "trtype": "rdma", 00:14:41.660 "traddr": "192.168.100.8", 00:14:41.660 "trsvcid": "4421" 00:14:41.660 }, 00:14:41.660 "method": "nvmf_subsystem_remove_listener", 00:14:41.660 "req_id": 1 00:14:41.660 } 00:14:41.660 Got JSON-RPC error response 00:14:41.660 response: 00:14:41.660 { 00:14:41.660 "code": -32602, 00:14:41.660 "message": "Invalid parameters" 00:14:41.660 }' 00:14:41.660 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:41.660 { 00:14:41.660 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:41.660 "listen_address": { 00:14:41.660 "trtype": "rdma", 00:14:41.660 "traddr": "192.168.100.8", 00:14:41.660 "trsvcid": "4421" 00:14:41.660 }, 00:14:41.660 "method": "nvmf_subsystem_remove_listener", 00:14:41.660 "req_id": 1 00:14:41.660 } 00:14:41.660 Got JSON-RPC error response 00:14:41.660 response: 00:14:41.660 { 00:14:41.660 "code": -32602, 00:14:41.660 "message": "Invalid parameters" 00:14:41.660 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:41.660 02:40:44 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15932 -i 0 00:14:41.919 [2024-05-15 02:40:45.011657] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15932: invalid cntlid range [0-65519] 00:14:41.919 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:41.919 { 00:14:41.919 "nqn": "nqn.2016-06.io.spdk:cnode15932", 00:14:41.919 "min_cntlid": 0, 00:14:41.919 "method": "nvmf_create_subsystem", 00:14:41.919 "req_id": 1 00:14:41.919 } 00:14:41.919 Got JSON-RPC error response 00:14:41.919 response: 00:14:41.919 { 00:14:41.919 "code": -32602, 00:14:41.919 "message": "Invalid cntlid range [0-65519]" 00:14:41.919 }' 00:14:41.919 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:41.919 { 00:14:41.919 "nqn": "nqn.2016-06.io.spdk:cnode15932", 00:14:41.919 "min_cntlid": 0, 00:14:41.919 "method": "nvmf_create_subsystem", 00:14:41.919 "req_id": 1 00:14:41.919 } 00:14:41.919 Got JSON-RPC error response 00:14:41.919 response: 00:14:41.919 { 00:14:41.919 "code": -32602, 
00:14:41.919 "message": "Invalid cntlid range [0-65519]" 00:14:41.919 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:41.919 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30761 -i 65520 00:14:42.177 [2024-05-15 02:40:45.276730] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30761: invalid cntlid range [65520-65519] 00:14:42.177 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:42.177 { 00:14:42.177 "nqn": "nqn.2016-06.io.spdk:cnode30761", 00:14:42.177 "min_cntlid": 65520, 00:14:42.177 "method": "nvmf_create_subsystem", 00:14:42.177 "req_id": 1 00:14:42.177 } 00:14:42.177 Got JSON-RPC error response 00:14:42.177 response: 00:14:42.177 { 00:14:42.177 "code": -32602, 00:14:42.177 "message": "Invalid cntlid range [65520-65519]" 00:14:42.177 }' 00:14:42.177 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:42.177 { 00:14:42.177 "nqn": "nqn.2016-06.io.spdk:cnode30761", 00:14:42.177 "min_cntlid": 65520, 00:14:42.177 "method": "nvmf_create_subsystem", 00:14:42.177 "req_id": 1 00:14:42.177 } 00:14:42.177 Got JSON-RPC error response 00:14:42.177 response: 00:14:42.177 { 00:14:42.177 "code": -32602, 00:14:42.177 "message": "Invalid cntlid range [65520-65519]" 00:14:42.177 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.177 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24133 -I 0 00:14:42.435 [2024-05-15 02:40:45.533727] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24133: invalid cntlid range [1-0] 00:14:42.435 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:42.435 { 00:14:42.435 "nqn": "nqn.2016-06.io.spdk:cnode24133", 00:14:42.435 "max_cntlid": 0, 00:14:42.435 "method": "nvmf_create_subsystem", 00:14:42.435 "req_id": 1 00:14:42.435 } 00:14:42.435 Got JSON-RPC error response 00:14:42.435 response: 00:14:42.435 { 00:14:42.435 "code": -32602, 00:14:42.435 "message": "Invalid cntlid range [1-0]" 00:14:42.435 }' 00:14:42.435 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:42.435 { 00:14:42.435 "nqn": "nqn.2016-06.io.spdk:cnode24133", 00:14:42.435 "max_cntlid": 0, 00:14:42.435 "method": "nvmf_create_subsystem", 00:14:42.435 "req_id": 1 00:14:42.435 } 00:14:42.435 Got JSON-RPC error response 00:14:42.435 response: 00:14:42.435 { 00:14:42.435 "code": -32602, 00:14:42.435 "message": "Invalid cntlid range [1-0]" 00:14:42.435 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.435 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19349 -I 65520 00:14:42.694 [2024-05-15 02:40:45.790715] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19349: invalid cntlid range [1-65520] 00:14:42.694 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:42.694 { 00:14:42.694 "nqn": "nqn.2016-06.io.spdk:cnode19349", 00:14:42.694 "max_cntlid": 65520, 00:14:42.694 "method": "nvmf_create_subsystem", 00:14:42.694 "req_id": 1 00:14:42.694 } 00:14:42.694 Got JSON-RPC error response 00:14:42.694 response: 00:14:42.694 { 00:14:42.694 "code": -32602, 00:14:42.694 "message": "Invalid 
cntlid range [1-65520]" 00:14:42.694 }' 00:14:42.694 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:42.694 { 00:14:42.694 "nqn": "nqn.2016-06.io.spdk:cnode19349", 00:14:42.694 "max_cntlid": 65520, 00:14:42.694 "method": "nvmf_create_subsystem", 00:14:42.694 "req_id": 1 00:14:42.694 } 00:14:42.694 Got JSON-RPC error response 00:14:42.694 response: 00:14:42.694 { 00:14:42.694 "code": -32602, 00:14:42.694 "message": "Invalid cntlid range [1-65520]" 00:14:42.694 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.694 02:40:45 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25477 -i 6 -I 5 00:14:42.953 [2024-05-15 02:40:46.047763] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25477: invalid cntlid range [6-5] 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:42.953 { 00:14:42.953 "nqn": "nqn.2016-06.io.spdk:cnode25477", 00:14:42.953 "min_cntlid": 6, 00:14:42.953 "max_cntlid": 5, 00:14:42.953 "method": "nvmf_create_subsystem", 00:14:42.953 "req_id": 1 00:14:42.953 } 00:14:42.953 Got JSON-RPC error response 00:14:42.953 response: 00:14:42.953 { 00:14:42.953 "code": -32602, 00:14:42.953 "message": "Invalid cntlid range [6-5]" 00:14:42.953 }' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:42.953 { 00:14:42.953 "nqn": "nqn.2016-06.io.spdk:cnode25477", 00:14:42.953 "min_cntlid": 6, 00:14:42.953 "max_cntlid": 5, 00:14:42.953 "method": "nvmf_create_subsystem", 00:14:42.953 "req_id": 1 00:14:42.953 } 00:14:42.953 Got JSON-RPC error response 00:14:42.953 response: 00:14:42.953 { 00:14:42.953 "code": -32602, 00:14:42.953 "message": "Invalid cntlid range [6-5]" 00:14:42.953 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:42.953 { 00:14:42.953 "name": "foobar", 00:14:42.953 "method": "nvmf_delete_target", 00:14:42.953 "req_id": 1 00:14:42.953 } 00:14:42.953 Got JSON-RPC error response 00:14:42.953 response: 00:14:42.953 { 00:14:42.953 "code": -32602, 00:14:42.953 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:42.953 }' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:42.953 { 00:14:42.953 "name": "foobar", 00:14:42.953 "method": "nvmf_delete_target", 00:14:42.953 "req_id": 1 00:14:42.953 } 00:14:42.953 Got JSON-RPC error response 00:14:42.953 response: 00:14:42.953 { 00:14:42.953 "code": -32602, 00:14:42.953 "message": "The specified target doesn't exist, cannot delete it." 
00:14:42.953 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:42.953 rmmod nvme_rdma 00:14:42.953 rmmod nvme_fabrics 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 755383 ']' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 755383 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 755383 ']' 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 755383 00:14:42.953 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 755383 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 755383' 00:14:43.212 killing process with pid 755383 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 755383 00:14:43.212 [2024-05-15 02:40:46.290032] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:43.212 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 755383 00:14:43.212 [2024-05-15 02:40:46.402589] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:43.471 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.471 02:40:46 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:43.471 00:14:43.471 real 0m11.367s 00:14:43.471 user 0m22.816s 00:14:43.471 sys 0m6.312s 00:14:43.471 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:43.471 02:40:46 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:43.471 ************************************ 00:14:43.471 END TEST nvmf_invalid 00:14:43.471 ************************************ 00:14:43.471 02:40:46 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:14:43.471 02:40:46 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:43.471 02:40:46 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:43.471 02:40:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:43.471 ************************************ 00:14:43.471 START TEST nvmf_abort 00:14:43.471 ************************************ 00:14:43.471 02:40:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:14:43.730 * Looking for test storage... 00:14:43.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.730 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
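At this point abort.sh has sourced test/nvmf/common.sh, which fixes the 4420/4421/4422 ports, the 192.168.100.x address prefix and a generated host NQN, and it now calls nvmftestinit: check that a transport was requested, install the nvmftestfini cleanup trap, then prepare the RDMA-capable NICs, as the trace below shows. A simplified, self-contained skeleton of that init/teardown contract is given here; the function bodies and the nvmfpid variable are illustrative, the real logic lives in nvmf/common.sh.

    #!/usr/bin/env bash
    # illustrative skeleton only; names mirror the trace, bodies are simplified
    TEST_TRANSPORT=${TEST_TRANSPORT:-rdma}
    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100

    nvmftestfini() {
        # unload the kernel initiator modules and stop the target app
        modprobe -r nvme-rdma nvme-fabrics 2>/dev/null || true
        [ -n "${nvmfpid:-}" ] && kill "$nvmfpid" 2>/dev/null || true
    }

    nvmftestinit() {
        # refuse to run without a transport and guarantee cleanup on any exit
        if [ -z "$TEST_TRANSPORT" ]; then
            echo "transport type not specified" >&2
            return 1
        fi
        trap nvmftestfini SIGINT SIGTERM EXIT
    }

    nvmftestinit

That trap is what produced the modprobe -v -r nvme-rdma and killprocess lines at the end of the nvmf_invalid run above.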
00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.731 02:40:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.300 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:50.301 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:50.301 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:50.301 Found net devices under 0000:18:00.0: mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
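The pci_net_devs=(...) glob above is how common.sh maps each Mellanox PCI function (0000:18:00.0 and 0000:18:00.1 in this run) to its Linux interface name: sysfs exposes the bound netdev under /sys/bus/pci/devices/<addr>/net/, and stripping the directory prefix leaves mlx_0_0 / mlx_0_1. A small standalone version of that lookup, with the PCI address taken from this run purely as an example:

    #!/usr/bin/env bash
    pci=0000:18:00.0
    # without nullglob an unmatched pattern stays literal, hence the -e check
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    if [ -e "${pci_net_devs[0]}" ]; then
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "no net device bound to $pci" >&2
    fi

The names found this way are what later receive the 192.168.100.8 and 192.168.100.9 addresses shown further down in the trace.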
00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:50.301 Found net devices under 0000:18:00.1: mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:50.301 02:40:53 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:50.301 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:50.301 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:14:50.301 altname enp24s0f0np0 00:14:50.301 altname ens785f0np0 00:14:50.301 inet 192.168.100.8/24 scope global mlx_0_0 00:14:50.301 valid_lft forever preferred_lft forever 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:50.301 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:50.301 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:14:50.301 altname enp24s0f1np1 00:14:50.301 altname ens785f1np1 00:14:50.301 inet 192.168.100.9/24 scope global mlx_0_1 00:14:50.301 valid_lft forever preferred_lft forever 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:50.301 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:50.302 192.168.100.9' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:50.302 192.168.100.9' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:50.302 192.168.100.9' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp 
']' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=759102 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 759102 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 759102 ']' 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:50.302 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.302 [2024-05-15 02:40:53.561258] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:14:50.302 [2024-05-15 02:40:53.561331] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.560 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.560 [2024-05-15 02:40:53.664286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.560 [2024-05-15 02:40:53.710677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.560 [2024-05-15 02:40:53.710728] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.560 [2024-05-15 02:40:53.710743] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.560 [2024-05-15 02:40:53.710756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.560 [2024-05-15 02:40:53.710767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
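
For reference, the address plumbing that produced NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 above reduces to the ip/awk/cut pipeline plus a head/tail split over RDMA_IP_LIST. A hedged recap follows; the interface names are hard-coded from this run rather than re-discovered.

# Sketch: the get_ip_address / RDMA_IP_LIST handling traced above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
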
00:14:50.560 [2024-05-15 02:40:53.710875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.560 [2024-05-15 02:40:53.710983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.560 [2024-05-15 02:40:53.710984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.560 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:50.560 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:14:50.560 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.560 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:50.560 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.818 02:40:53 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.818 02:40:53 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:14:50.818 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.818 02:40:53 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.818 [2024-05-15 02:40:53.893995] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa7a560/0xa7ea50) succeed. 00:14:50.818 [2024-05-15 02:40:53.908610] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa7bb00/0xac00e0) succeed. 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.818 Malloc0 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.818 Delay0 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.818 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.819 [2024-05-15 02:40:54.091029] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:50.819 [2024-05-15 02:40:54.091386] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.819 02:40:54 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:51.079 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.079 [2024-05-15 02:40:54.206396] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:53.617 Initializing NVMe Controllers 00:14:53.617 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:14:53.617 controller IO queue size 128 less than required 00:14:53.617 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:53.617 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:53.617 Initialization complete. Launching workers. 
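
The abort run that just launched its workers was configured entirely over the SPDK RPC socket; the rpc_cmd calls traced in target/abort.sh correspond to scripts/rpc.py subcommands of the same names (the ones ns_hotplug_stress invokes directly later in this log). A condensed sketch of that setup, with $rootdir standing in for /var/jenkins/workspace/nvmf-phy-autotest/spdk:

# Sketch of the abort-test target setup traced above (rpc.py equivalents of rpc_cmd).
rpc=$rootdir/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# Fire 128-deep aborts at the delay-backed namespace for 1 second:
$rootdir/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
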
00:14:53.617 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34021 00:14:53.617 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34082, failed to submit 62 00:14:53.617 success 34022, unsuccess 60, failed 0 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:53.617 rmmod nvme_rdma 00:14:53.617 rmmod nvme_fabrics 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 759102 ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 759102 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 759102 ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 759102 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 759102 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 759102' 00:14:53.617 killing process with pid 759102 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # kill 759102 00:14:53.617 [2024-05-15 02:40:56.454804] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@971 -- # wait 759102 00:14:53.617 [2024-05-15 02:40:56.543150] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma 
== \t\c\p ]] 00:14:53.617 00:14:53.617 real 0m10.071s 00:14:53.617 user 0m13.027s 00:14:53.617 sys 0m5.628s 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:53.617 02:40:56 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:53.617 ************************************ 00:14:53.617 END TEST nvmf_abort 00:14:53.617 ************************************ 00:14:53.617 02:40:56 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:14:53.617 02:40:56 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:53.617 02:40:56 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:53.617 02:40:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:53.617 ************************************ 00:14:53.617 START TEST nvmf_ns_hotplug_stress 00:14:53.617 ************************************ 00:14:53.617 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:14:53.876 * Looking for test storage... 00:14:53.876 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
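
Between the END TEST banner for nvmf_abort above and the start of nvmf_ns_hotplug_stress, the trace walks the usual teardown: delete the test subsystem, sync, unload nvme-rdma and nvme-fabrics, then kill and reap the nvmf_tgt pid (killprocess also checks the process name first, as the ps/comm lines show). A hedged condensation; cleanup_target is an illustrative name, not a helper from the suite.

# Illustrative only: the teardown order seen in the nvmf_abort epilogue.
cleanup_target() {
    local rpc=$rootdir/scripts/rpc.py nvmfpid=$1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # drop the subsystem first
    sync
    modprobe -v -r nvme-rdma                                # host-side modules next
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                      # finally stop nvmf_tgt
}
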
00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.876 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.877 02:40:56 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.877 02:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:00.446 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:00.446 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:00.446 02:41:03 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:00.446 Found net devices under 0000:18:00.0: mlx_0_0 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:00.446 Found net devices under 0000:18:00.1: mlx_0_1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:00.446 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:00.446 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:15:00.446 altname enp24s0f0np0 00:15:00.446 altname ens785f0np0 00:15:00.446 inet 192.168.100.8/24 scope global mlx_0_0 00:15:00.446 valid_lft 
forever preferred_lft forever 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:00.446 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:00.446 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:00.446 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:15:00.446 altname enp24s0f1np1 00:15:00.446 altname ens785f1np1 00:15:00.446 inet 192.168.100.9/24 scope global mlx_0_1 00:15:00.446 valid_lft forever preferred_lft forever 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:00.447 192.168.100.9' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:00.447 192.168.100.9' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:00.447 192.168.100.9' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=762451 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 762451 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 762451 ']' 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:00.447 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.447 [2024-05-15 02:41:03.503052] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:00.447 [2024-05-15 02:41:03.503111] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.447 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.447 [2024-05-15 02:41:03.589005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.447 [2024-05-15 02:41:03.635129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.447 [2024-05-15 02:41:03.635187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.447 [2024-05-15 02:41:03.635202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.447 [2024-05-15 02:41:03.635215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.447 [2024-05-15 02:41:03.635226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
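
The nvmfappstart -m 0xE step above amounts to launching nvmf_tgt in the background, remembering its pid, and waiting (up to 100 retries, per the max_retries line) for the RPC socket at /var/tmp/spdk.sock. A simplified, hedged stand-in follows; the real waitforlisten probes the RPC endpoint rather than merely testing for the socket file, so treat the readiness check here as an approximation.

# Sketch only: approximate nvmfappstart + waitforlisten from the trace.
$rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the log
    [[ -S $rpc_addr ]] && break          # simplified readiness check
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
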
00:15:00.447 [2024-05-15 02:41:03.635332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.447 [2024-05-15 02:41:03.635439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.447 [2024-05-15 02:41:03.635439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:00.706 02:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:00.965 [2024-05-15 02:41:04.034838] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xea9560/0xeada50) succeed. 00:15:00.965 [2024-05-15 02:41:04.049447] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeaab00/0xeef0e0) succeed. 00:15:00.965 02:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:01.225 02:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:01.484 [2024-05-15 02:41:04.561843] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:01.484 [2024-05-15 02:41:04.562272] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:01.484 02:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:01.743 02:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:02.002 Malloc0 00:15:02.002 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:02.002 Delay0 00:15:02.002 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.260 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 
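
What follows in the trace (the repeating nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns / bdev_null_resize records interleaved with the suppressed read-error messages) is the core of the stress loop: spdk_nvme_perf reads from the target in the background while namespace 1 is removed and re-added and NULL1 is grown one unit at a time, for as long as the perf pid stays alive. A hedged reconstruction of that loop, again with $rootdir standing in for the spdk checkout:

# Sketch of the hot-plug stress loop whose iterations fill the rest of this log.
rpc=$rootdir/scripts/rpc.py
$rootdir/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID"; do                      # run until the 30 s perf job exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 $((++null_size))   # 1001, 1002, ... as in the trace
done
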
00:15:02.519 NULL1 00:15:02.519 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:02.778 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=762756 00:15:02.778 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:02.778 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:02.778 02:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.157 Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 02:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:04.416 02:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:04.416 02:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:04.416 true 00:15:04.675 02:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:04.675 02:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.243 02:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.243 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:05.502 02:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:05.502 02:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:05.761 true 00:15:05.761 02:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:05.761 02:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 02:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.728 02:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:06.728 02:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:06.987 true 00:15:06.987 02:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:06.987 02:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 02:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:07.924 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:08.182 02:41:11 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:08.182 02:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:08.440 true 00:15:08.440 02:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:08.441 02:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:09.008 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.267 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:09.267 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:09.525 true 00:15:09.525 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:09.525 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.783 02:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.046 02:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:10.046 02:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:10.312 true 00:15:10.312 02:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:10.312 02:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.251 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.509 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:11.509 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:11.509 true 00:15:11.509 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:11.510 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.768 02:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.026 02:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:12.026 02:41:15 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:12.285 true 00:15:12.285 02:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:12.285 02:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.222 02:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.481 02:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:13.481 02:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:13.740 true 00:15:13.740 02:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:13.740 02:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.998 02:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:13.998 02:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:13.998 02:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:14.257 true 00:15:14.257 02:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:14.257 02:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.193 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.452 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:15.452 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:15.452 true 00:15:15.452 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:15.452 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.712 02:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:15.971 02:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:15.971 02:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:16.230 true 00:15:16.230 
02:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:16.230 02:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:17.167 02:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.425 02:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:17.425 02:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:17.683 true 00:15:17.683 02:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:17.683 02:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.941 02:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:18.200 02:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:18.200 02:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:18.459 true 00:15:18.459 02:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:18.459 02:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.394 02:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:19.394 02:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:19.394 02:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:19.652 true 00:15:19.652 02:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:19.652 02:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.911 02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.170 02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:20.170 02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:15:20.427 true 00:15:20.428 02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:20.428 
02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.685 02:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.943 02:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:20.943 02:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:21.201 true 00:15:21.201 02:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:21.201 02:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:22.136 02:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.394 02:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:22.394 02:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:22.652 true 00:15:22.652 02:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:22.652 02:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.586 02:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:23.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:23.587 02:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:23.587 02:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:23.845 true 00:15:23.845 02:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:23.845 02:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 02:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:24.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.065 02:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:25.066 02:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:25.066 true 00:15:25.324 02:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:25.324 02:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.892 02:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:25.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:26.151 02:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:26.151 02:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:26.410 true 00:15:26.410 02:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:26.410 02:41:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.345 02:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:15:27.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:27.345 02:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:27.345 02:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:27.604 true 00:15:27.604 02:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:27.604 02:41:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 02:41:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:28.799 02:41:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:28.799 02:41:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:29.057 true 00:15:29.057 02:41:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:29.057 02:41:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:29.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 02:41:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:29.883 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:15:30.142 02:41:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:30.142 02:41:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:30.142 true 00:15:30.400 02:41:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:30.400 02:41:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.967 02:41:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:30.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:30.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:31.226 02:41:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:31.226 02:41:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:31.484 true 00:15:31.484 02:41:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:31.484 02:41:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:32.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.422 02:41:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:32.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:32.682 02:41:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:32.682 02:41:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:32.682 true 00:15:32.682 02:41:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:32.682 02:41:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.618 02:41:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:33.877 02:41:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:33.877 02:41:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:33.877 true 00:15:33.877 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:33.877 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.136 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:34.395 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:34.395 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:34.654 true 00:15:34.654 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:34.654 02:41:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.913 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.172 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:35.172 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:35.172 true 00:15:35.172 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:35.172 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.435 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:35.697 Initializing NVMe Controllers 00:15:35.697 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:35.697 Controller IO queue size 128, less than required. 00:15:35.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:35.697 Controller IO queue size 128, less than required. 00:15:35.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:35.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:35.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:35.697 Initialization complete. Launching workers. 
00:15:35.697 ======================================================== 00:15:35.697 Latency(us) 00:15:35.697 Device Information : IOPS MiB/s Average min max 00:15:35.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3632.03 1.77 24975.78 1387.92 1204206.37 00:15:35.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18284.14 8.93 7000.73 2011.74 436560.15 00:15:35.697 ======================================================== 00:15:35.697 Total : 21916.17 10.70 9979.63 1387.92 1204206.37 00:15:35.697 00:15:35.697 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:15:35.697 02:41:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:15:35.956 true 00:15:35.956 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 762756 00:15:35.956 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (762756) - No such process 00:15:35.956 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 762756 00:15:35.956 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.214 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:36.473 null0 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:36.473 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:36.731 null1 00:15:36.731 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:36.731 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:36.731 02:41:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:36.990 null2 00:15:36.990 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:36.990 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:36.990 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:37.250 null3 
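The xtrace entries above (ns_hotplug_stress.sh markers @36-@55) correspond to the single-namespace phase of the test: spdk_nvme_perf reads from the RDMA target while namespace 1 is repeatedly removed and re-added and the NULL1 bdev is grown one block per pass, until perf exits ("No such process"). The following is a minimal bash sketch of the loop those markers suggest, reconstructed from the trace rather than quoted from the script; variable names and exact structure beyond what the markers show are assumptions.

#!/usr/bin/env bash
# Reconstruction of the single-namespace phase suggested by the trace (markers @40-@55).
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# @40-@42: start spdk_nvme_perf against the RDMA target in the background
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
# @44-@50: while perf is still running, hot-remove/re-add namespace 1 and
# resize the NULL1 bdev one block larger each pass (1001, 1002, ... as logged)
while kill -0 "$PERF_PID"; do
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc_py bdev_null_resize NULL1 $null_size
done

# @53-@55: once perf has exited, reap it and drop both namespaces
# before the multi-threaded phase begins
wait "$PERF_PID"
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2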
00:15:37.250 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:37.250 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:37.250 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:37.509 null4 00:15:37.509 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:37.509 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:37.509 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:37.768 null5 00:15:37.768 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:37.768 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:37.768 02:41:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:38.125 null6 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:38.125 null7 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 767463 767464 767467 767468 767470 767472 767474 767476 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.125 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:38.385 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
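The markers @58-@66 and @14-@18 traced above describe the multi-threaded phase: eight null bdevs (null0-null7, 100 blocks of 4096 bytes each) are created, then eight background workers each hot-add and hot-remove their own namespace ten times while the script waits on all of their pids (the "wait 767463 767464 ..." entry). A bash sketch of what those markers suggest, under the same caveat as the sketch above; this is a reconstruction, not the script itself, and the real helper may differ in detail.

#!/usr/bin/env bash
# Reconstruction of the 8-worker add/remove phase suggested by markers @14-@18 and @58-@66.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

add_remove() {                              # @14-@18: one worker per namespace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do          # ten add/remove cycles, as traced at @16
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do        # @59-@60: one null bdev per worker
    $rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do        # @62-@64: launch workers in the background
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                           # @66: wait for all eight workers to finish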
00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.644 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:38.645 02:41:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:38.904 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.164 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.423 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:39.681 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.939 02:41:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:39.939 
02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:39.939 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.198 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:40.457 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.458 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 
02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:40.717 02:41:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:40.976 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.235 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
6 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:41.236 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:41.494 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.495 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:41.753 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.754 02:41:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.012 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:42.270 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.529 02:41:45 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.529 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:42.788 02:41:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:42.788 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.047 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:43.306 rmmod nvme_rdma 00:15:43.306 rmmod nvme_fabrics 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 762451 ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 762451 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 762451 ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 762451 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 762451 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 762451' 00:15:43.306 killing process with pid 762451 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 762451 00:15:43.306 [2024-05-15 02:41:46.507234] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:43.306 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 762451 00:15:43.306 [2024-05-15 02:41:46.591785] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:43.565 02:41:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.565 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:43.565 00:15:43.565 real 0m49.965s 00:15:43.565 user 3m35.116s 00:15:43.565 sys 0m16.367s 00:15:43.565 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:43.565 02:41:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:43.565 ************************************ 00:15:43.565 END TEST nvmf_ns_hotplug_stress 00:15:43.565 ************************************ 00:15:43.824 02:41:46 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:15:43.824 02:41:46 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:43.825 02:41:46 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:43.825 02:41:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:43.825 ************************************ 00:15:43.825 START TEST nvmf_connect_stress 00:15:43.825 ************************************ 00:15:43.825 02:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:15:43.825 * Looking for test storage... 00:15:43.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.825 
02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.825 02:41:47 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.825 02:41:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.393 02:41:53 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:50.393 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:50.393 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:50.393 Found net devices under 0000:18:00.0: mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:50.393 Found net devices under 0000:18:00.1: mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
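For readers following the trace: the rdma_device_init / allocate_nic_ips sequence above loads the kernel RDMA stack and then walks the Mellanox netdevs, making sure each one carries an address from the 192.168.100.0/24 range configured earlier (NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8). A minimal bash sketch of that flow, assuming the helpers behave the way the trace suggests; the real implementations in spdk/test/nvmf/common.sh cover more corner cases:

  # Simplified sketch of the device-init phase traced above, not the exact helpers.
  load_ib_rdma_modules() {
      local mod
      for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
          modprobe "$mod"    # same module list as the modprobe calls in the trace
      done
  }

  allocate_nic_ips() {
      # With NVMF_IP_PREFIX=192.168.100 and NVMF_IP_LEAST_ADDR=8, mlx_0_0 ends up
      # as 192.168.100.8 and mlx_0_1 as 192.168.100.9, matching the output that follows.
      local count=${NVMF_IP_LEAST_ADDR:-8} nic
      for nic in $(get_rdma_if_list); do    # traced helper; lists mlx_0_0 mlx_0_1 here
          ip addr show "$nic" | grep -q 'inet ' ||
              ip addr add "${NVMF_IP_PREFIX:-192.168.100}.$count/24" dev "$nic"
          ip link set "$nic" up
          (( count++ ))
      done
  }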
00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:50.393 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:50.393 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:15:50.393 altname enp24s0f0np0 00:15:50.393 altname ens785f0np0 00:15:50.393 inet 192.168.100.8/24 scope global mlx_0_0 00:15:50.393 valid_lft forever preferred_lft forever 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:50.393 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:50.393 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:15:50.393 altname enp24s0f1np1 00:15:50.393 altname ens785f1np1 00:15:50.393 inet 192.168.100.9/24 scope global mlx_0_1 00:15:50.393 valid_lft forever preferred_lft forever 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 
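Each of the get_ip_address calls traced here runs the same three-step pipeline: ip -o -4 addr show prints one IPv4 record for the interface, awk picks the address/prefix field, and cut strips the prefix length. Collected into a standalone helper it is roughly:

  # Equivalent of the get_ip_address steps in the trace:
  #   get_ip_address mlx_0_0  ->  192.168.100.8
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

The two addresses gathered this way are what the harness stores a few entries further down as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP (head -n 1 and tail -n +2 of the combined list).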
00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:50.393 192.168.100.9' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:50.393 192.168.100.9' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:50.393 192.168.100.9' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=771249 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 771249 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 771249 ']' 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:50.393 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.650 [2024-05-15 02:41:53.707911] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:50.650 [2024-05-15 02:41:53.707987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.650 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.650 [2024-05-15 02:41:53.805816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.650 [2024-05-15 02:41:53.852725] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.650 [2024-05-15 02:41:53.852769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.650 [2024-05-15 02:41:53.852784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.650 [2024-05-15 02:41:53.852797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.650 [2024-05-15 02:41:53.852808] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.650 [2024-05-15 02:41:53.854915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.650 [2024-05-15 02:41:53.855013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.650 [2024-05-15 02:41:53.855014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.908 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:50.908 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:15:50.908 02:41:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.908 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:50.908 02:41:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [2024-05-15 02:41:54.038753] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22cd560/0x22d1a50) succeed. 00:15:50.908 [2024-05-15 02:41:54.053647] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22ceb00/0x23130e0) succeed. 
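By this point the trace has started the NVMe-oF target (nvmf_tgt -i 0 -e 0xFFFF -m 0xE), waited for it to listen on /var/tmp/spdk.sock, and created the RDMA transport, after which the two mlx5 IB devices register. A hedged sketch of the same bring-up done by hand, calling scripts/rpc.py directly instead of the test's rpc_cmd/waitforlisten wrappers (binary path and transport options copied from the trace; the polling loop below is an assumption standing in for waitforlisten):

#!/usr/bin/env bash
# Sketch only: manual equivalent of the target bring-up traced above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # checkout path from the trace

# Start the target with shm id 0, all tracepoint groups, and core mask 0xE (cores 1-3).
"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the RPC socket until the target answers (roughly what waitforlisten does).
until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done

# Create the RDMA transport with the same options the test passes.
"$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192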
00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:50.908 [2024-05-15 02:41:54.190083] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:50.908 [2024-05-15 02:41:54.190422] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:50.908 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.166 NULL1 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=771396 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:15:51.166 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.424 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.424 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:51.424 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.424 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.424 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.988 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.988 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:51.988 02:41:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.988 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.988 02:41:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.246 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.246 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:52.246 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.246 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.246 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.504 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.504 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:52.504 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.504 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.504 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.761 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.761 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:52.761 02:41:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.761 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.761 02:41:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.017 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.017 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:53.017 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.017 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.017 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.582 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.582 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:53.582 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.582 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.582 
02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.840 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.840 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:53.840 02:41:56 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.840 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.840 02:41:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.098 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.098 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:54.098 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.098 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.098 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.356 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.356 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:54.356 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.356 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.356 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.923 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:54.923 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:54.923 02:41:57 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.923 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:54.923 02:41:57 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.181 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:55.181 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.181 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.181 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.439 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.439 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:55.439 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.439 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.439 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.696 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.696 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:55.696 02:41:58 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.696 02:41:58 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.696 02:41:58 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.954 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.954 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:55.955 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.955 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.955 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.522 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.522 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:56.522 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.522 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.523 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.781 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.781 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:56.781 02:41:59 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.781 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.781 02:41:59 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.039 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.039 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:57.039 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.039 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.039 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.297 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.297 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:57.297 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.297 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.297 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.863 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:57.863 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:57.863 02:42:00 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.863 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:57.863 02:42:00 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.120 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.120 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:58.120 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.120 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.120 02:42:01 nvmf_rdma.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.377 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.377 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:58.377 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.377 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.377 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.635 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.635 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:58.635 02:42:01 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.635 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.635 02:42:01 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.892 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:58.892 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:58.892 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.892 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:58.892 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.457 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.457 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:59.457 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.457 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.457 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.715 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.715 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:59.715 02:42:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.715 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.715 02:42:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.972 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.972 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:15:59.972 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.972 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.972 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.230 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.230 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:16:00.230 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.230 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.230 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # 
set +x 00:16:00.794 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.794 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:16:00.794 02:42:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.794 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.794 02:42:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.052 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.052 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:16:01.052 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.052 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.052 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.310 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 771396 00:16:01.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (771396) - No such process 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 771396 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:01.310 rmmod nvme_rdma 00:16:01.310 rmmod nvme_fabrics 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 771249 ']' 00:16:01.310 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 771249 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 771249 ']' 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 771249 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 
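The long repetitive block above is the connect_stress watchdog loop: target/connect_stress.sh keeps issuing rpc_cmd batches while kill -0 confirms that the stress client (PID 771396) still exists, and the loop ends once kill -0 reports "No such process", after which the script waits on the PID and deletes rpc.txt. A minimal sketch of that liveness-poll pattern (the backgrounded sleep is a stand-in for the connect_stress client, not the real worker):

#!/usr/bin/env bash
# kill -0 sends no signal; it only tests whether the PID is still alive.
sleep 30 &          # stand-in for the connect_stress client process
pid=$!

while kill -0 "$pid" 2> /dev/null; do
    # One unit of work per iteration (the test pushes RPCs here).
    sleep 1
done

wait "$pid"         # reap the worker and pick up its exit status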
00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 771249 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 771249' 00:16:01.311 killing process with pid 771249 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 771249 00:16:01.311 [2024-05-15 02:42:04.572940] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:01.311 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 771249 00:16:01.569 [2024-05-15 02:42:04.659618] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:01.569 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.569 02:42:04 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:01.569 00:16:01.569 real 0m17.951s 00:16:01.569 user 0m40.382s 00:16:01.569 sys 0m7.716s 00:16:01.569 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:01.569 02:42:04 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.569 ************************************ 00:16:01.569 END TEST nvmf_connect_stress 00:16:01.569 ************************************ 00:16:01.828 02:42:04 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:01.828 02:42:04 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:01.828 02:42:04 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:01.828 02:42:04 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:01.828 ************************************ 00:16:01.828 START TEST nvmf_fused_ordering 00:16:01.828 ************************************ 00:16:01.828 02:42:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:01.828 * Looking for test storage... 
00:16:01.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.828 02:42:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:08.429 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:08.430 02:42:11 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:08.430 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:08.430 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:08.430 Found net devices under 0000:18:00.0: mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:08.430 Found net devices under 0000:18:00.1: mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:08.430 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.430 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:16:08.430 altname enp24s0f0np0 00:16:08.430 altname ens785f0np0 00:16:08.430 inet 192.168.100.8/24 scope global mlx_0_0 00:16:08.430 valid_lft forever preferred_lft forever 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:08.430 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.430 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:16:08.430 altname enp24s0f1np1 00:16:08.430 altname ens785f1np1 00:16:08.430 inet 192.168.100.9/24 scope global mlx_0_1 00:16:08.430 valid_lft forever preferred_lft forever 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:08.430 192.168.100.9' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:08.430 192.168.100.9' 
00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:08.430 192.168.100.9' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=776136 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 776136 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 776136 ']' 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:08.430 02:42:11 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:08.430 [2024-05-15 02:42:11.480070] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:16:08.430 [2024-05-15 02:42:11.480150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.430 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.430 [2024-05-15 02:42:11.583182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.430 [2024-05-15 02:42:11.629020] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
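
Lines @456 to @458 flatten the per-interface results into the newline-separated RDMA_IP_LIST and then pick the first and second entries with head/tail before nvme-rdma is loaded. A small sketch of that selection, using the two addresses this run discovered:

    # Sketch of the head/tail selection visible in the trace; list contents are from this run.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    echo "NVMF_FIRST_TARGET_IP=$NVMF_FIRST_TARGET_IP"    # 192.168.100.8
    echo "NVMF_SECOND_TARGET_IP=$NVMF_SECOND_TARGET_IP"  # 192.168.100.9
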
00:16:08.430 [2024-05-15 02:42:11.629069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.430 [2024-05-15 02:42:11.629084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.430 [2024-05-15 02:42:11.629097] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.430 [2024-05-15 02:42:11.629108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.430 [2024-05-15 02:42:11.629137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.012 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:09.012 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:16:09.012 02:42:12 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.012 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:09.012 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 [2024-05-15 02:42:12.366133] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x215ef50/0x2163440) succeed. 00:16:09.272 [2024-05-15 02:42:12.379623] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2160450/0x21a4ad0) succeed. 
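
nvmfappstart launches the target with the flags shown in the trace and blocks in waitforlisten until the RPC socket answers. Only the binary path, the launch flags and the socket path below are taken from this run; the polling loop is a simplified stand-in for the harness's waitforlisten helper, which also retries RPC calls.

    # Simplified stand-in for nvmfappstart/waitforlisten (flags and paths as in this run).
    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll until the UNIX-domain RPC socket exists; the real helper does more than this.
    for _ in $(seq 1 100); do
        [ -S "$RPC_SOCK" ] && break
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $nvmfpid"
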
00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 [2024-05-15 02:42:12.445819] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:09.272 [2024-05-15 02:42:12.446201] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 NULL1 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:09.272 02:42:12 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:09.272 [2024-05-15 02:42:12.502567] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
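
The rpc_cmd calls above build the test target: an RDMA transport, subsystem cnode1 with a listener on the first target IP, and a null bdev (NULL1, 1000 MB, 512-byte blocks) attached as namespace 1. Since rpc_cmd just forwards its arguments to the target's RPC interface, roughly the same bring-up can be expressed through scripts/rpc.py (socket path as in this run); this is a sketch, not the harness itself:

    # Rough scripts/rpc.py equivalent of the rpc_cmd sequence traced above.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects with the same address in its -r transport string; the numbered fused_ordering(0..1023) lines that follow are its per-iteration progress output.
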
00:16:09.272 [2024-05-15 02:42:12.502633] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776337 ] 00:16:09.272 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.532 Attached to nqn.2016-06.io.spdk:cnode1 00:16:09.532 Namespace ID: 1 size: 1GB 00:16:09.532 fused_ordering(0) 00:16:09.532 fused_ordering(1) 00:16:09.532 fused_ordering(2) 00:16:09.532 fused_ordering(3) 00:16:09.532 fused_ordering(4) 00:16:09.532 fused_ordering(5) 00:16:09.532 fused_ordering(6) 00:16:09.532 fused_ordering(7) 00:16:09.532 fused_ordering(8) 00:16:09.532 fused_ordering(9) 00:16:09.532 fused_ordering(10) 00:16:09.532 fused_ordering(11) 00:16:09.532 fused_ordering(12) 00:16:09.532 fused_ordering(13) 00:16:09.532 fused_ordering(14) 00:16:09.532 fused_ordering(15) 00:16:09.532 fused_ordering(16) 00:16:09.532 fused_ordering(17) 00:16:09.532 fused_ordering(18) 00:16:09.532 fused_ordering(19) 00:16:09.532 fused_ordering(20) 00:16:09.532 fused_ordering(21) 00:16:09.532 fused_ordering(22) 00:16:09.532 fused_ordering(23) 00:16:09.532 fused_ordering(24) 00:16:09.532 fused_ordering(25) 00:16:09.532 fused_ordering(26) 00:16:09.532 fused_ordering(27) 00:16:09.532 fused_ordering(28) 00:16:09.532 fused_ordering(29) 00:16:09.532 fused_ordering(30) 00:16:09.532 fused_ordering(31) 00:16:09.532 fused_ordering(32) 00:16:09.532 fused_ordering(33) 00:16:09.532 fused_ordering(34) 00:16:09.532 fused_ordering(35) 00:16:09.532 fused_ordering(36) 00:16:09.532 fused_ordering(37) 00:16:09.532 fused_ordering(38) 00:16:09.532 fused_ordering(39) 00:16:09.532 fused_ordering(40) 00:16:09.532 fused_ordering(41) 00:16:09.532 fused_ordering(42) 00:16:09.532 fused_ordering(43) 00:16:09.532 fused_ordering(44) 00:16:09.532 fused_ordering(45) 00:16:09.532 fused_ordering(46) 00:16:09.532 fused_ordering(47) 00:16:09.532 fused_ordering(48) 00:16:09.532 fused_ordering(49) 00:16:09.532 fused_ordering(50) 00:16:09.532 fused_ordering(51) 00:16:09.532 fused_ordering(52) 00:16:09.532 fused_ordering(53) 00:16:09.532 fused_ordering(54) 00:16:09.532 fused_ordering(55) 00:16:09.532 fused_ordering(56) 00:16:09.532 fused_ordering(57) 00:16:09.532 fused_ordering(58) 00:16:09.532 fused_ordering(59) 00:16:09.532 fused_ordering(60) 00:16:09.532 fused_ordering(61) 00:16:09.532 fused_ordering(62) 00:16:09.532 fused_ordering(63) 00:16:09.532 fused_ordering(64) 00:16:09.532 fused_ordering(65) 00:16:09.532 fused_ordering(66) 00:16:09.532 fused_ordering(67) 00:16:09.532 fused_ordering(68) 00:16:09.532 fused_ordering(69) 00:16:09.532 fused_ordering(70) 00:16:09.532 fused_ordering(71) 00:16:09.532 fused_ordering(72) 00:16:09.532 fused_ordering(73) 00:16:09.532 fused_ordering(74) 00:16:09.532 fused_ordering(75) 00:16:09.532 fused_ordering(76) 00:16:09.532 fused_ordering(77) 00:16:09.532 fused_ordering(78) 00:16:09.532 fused_ordering(79) 00:16:09.532 fused_ordering(80) 00:16:09.532 fused_ordering(81) 00:16:09.532 fused_ordering(82) 00:16:09.532 fused_ordering(83) 00:16:09.532 fused_ordering(84) 00:16:09.532 fused_ordering(85) 00:16:09.532 fused_ordering(86) 00:16:09.532 fused_ordering(87) 00:16:09.532 fused_ordering(88) 00:16:09.532 fused_ordering(89) 00:16:09.532 fused_ordering(90) 00:16:09.532 fused_ordering(91) 00:16:09.532 fused_ordering(92) 00:16:09.532 fused_ordering(93) 00:16:09.532 fused_ordering(94) 00:16:09.532 fused_ordering(95) 00:16:09.532 fused_ordering(96) 00:16:09.532 
fused_ordering(97) 00:16:09.532 fused_ordering(98) 00:16:09.532 fused_ordering(99) 00:16:09.532 fused_ordering(100) 00:16:09.532 fused_ordering(101) 00:16:09.532 fused_ordering(102) 00:16:09.532 fused_ordering(103) 00:16:09.532 fused_ordering(104) 00:16:09.532 fused_ordering(105) 00:16:09.532 fused_ordering(106) 00:16:09.532 fused_ordering(107) 00:16:09.532 fused_ordering(108) 00:16:09.532 fused_ordering(109) 00:16:09.532 fused_ordering(110) 00:16:09.532 fused_ordering(111) 00:16:09.532 fused_ordering(112) 00:16:09.532 fused_ordering(113) 00:16:09.532 fused_ordering(114) 00:16:09.532 fused_ordering(115) 00:16:09.532 fused_ordering(116) 00:16:09.532 fused_ordering(117) 00:16:09.532 fused_ordering(118) 00:16:09.532 fused_ordering(119) 00:16:09.532 fused_ordering(120) 00:16:09.532 fused_ordering(121) 00:16:09.532 fused_ordering(122) 00:16:09.532 fused_ordering(123) 00:16:09.532 fused_ordering(124) 00:16:09.532 fused_ordering(125) 00:16:09.532 fused_ordering(126) 00:16:09.532 fused_ordering(127) 00:16:09.532 fused_ordering(128) 00:16:09.532 fused_ordering(129) 00:16:09.532 fused_ordering(130) 00:16:09.532 fused_ordering(131) 00:16:09.532 fused_ordering(132) 00:16:09.532 fused_ordering(133) 00:16:09.532 fused_ordering(134) 00:16:09.532 fused_ordering(135) 00:16:09.532 fused_ordering(136) 00:16:09.532 fused_ordering(137) 00:16:09.532 fused_ordering(138) 00:16:09.532 fused_ordering(139) 00:16:09.532 fused_ordering(140) 00:16:09.532 fused_ordering(141) 00:16:09.532 fused_ordering(142) 00:16:09.532 fused_ordering(143) 00:16:09.532 fused_ordering(144) 00:16:09.532 fused_ordering(145) 00:16:09.532 fused_ordering(146) 00:16:09.532 fused_ordering(147) 00:16:09.532 fused_ordering(148) 00:16:09.532 fused_ordering(149) 00:16:09.532 fused_ordering(150) 00:16:09.532 fused_ordering(151) 00:16:09.532 fused_ordering(152) 00:16:09.532 fused_ordering(153) 00:16:09.532 fused_ordering(154) 00:16:09.532 fused_ordering(155) 00:16:09.532 fused_ordering(156) 00:16:09.532 fused_ordering(157) 00:16:09.532 fused_ordering(158) 00:16:09.532 fused_ordering(159) 00:16:09.532 fused_ordering(160) 00:16:09.532 fused_ordering(161) 00:16:09.532 fused_ordering(162) 00:16:09.532 fused_ordering(163) 00:16:09.532 fused_ordering(164) 00:16:09.532 fused_ordering(165) 00:16:09.532 fused_ordering(166) 00:16:09.532 fused_ordering(167) 00:16:09.532 fused_ordering(168) 00:16:09.532 fused_ordering(169) 00:16:09.532 fused_ordering(170) 00:16:09.532 fused_ordering(171) 00:16:09.532 fused_ordering(172) 00:16:09.532 fused_ordering(173) 00:16:09.532 fused_ordering(174) 00:16:09.532 fused_ordering(175) 00:16:09.532 fused_ordering(176) 00:16:09.532 fused_ordering(177) 00:16:09.532 fused_ordering(178) 00:16:09.532 fused_ordering(179) 00:16:09.532 fused_ordering(180) 00:16:09.532 fused_ordering(181) 00:16:09.532 fused_ordering(182) 00:16:09.532 fused_ordering(183) 00:16:09.532 fused_ordering(184) 00:16:09.532 fused_ordering(185) 00:16:09.532 fused_ordering(186) 00:16:09.532 fused_ordering(187) 00:16:09.532 fused_ordering(188) 00:16:09.532 fused_ordering(189) 00:16:09.532 fused_ordering(190) 00:16:09.532 fused_ordering(191) 00:16:09.532 fused_ordering(192) 00:16:09.532 fused_ordering(193) 00:16:09.532 fused_ordering(194) 00:16:09.532 fused_ordering(195) 00:16:09.532 fused_ordering(196) 00:16:09.532 fused_ordering(197) 00:16:09.532 fused_ordering(198) 00:16:09.532 fused_ordering(199) 00:16:09.532 fused_ordering(200) 00:16:09.532 fused_ordering(201) 00:16:09.532 fused_ordering(202) 00:16:09.532 fused_ordering(203) 00:16:09.532 fused_ordering(204) 
00:16:09.532 fused_ordering(205) 00:16:09.792 fused_ordering(206) 00:16:09.792 fused_ordering(207) 00:16:09.792 fused_ordering(208) 00:16:09.792 fused_ordering(209) 00:16:09.792 fused_ordering(210) 00:16:09.792 fused_ordering(211) 00:16:09.792 fused_ordering(212) 00:16:09.792 fused_ordering(213) 00:16:09.792 fused_ordering(214) 00:16:09.792 fused_ordering(215) 00:16:09.792 fused_ordering(216) 00:16:09.792 fused_ordering(217) 00:16:09.792 fused_ordering(218) 00:16:09.792 fused_ordering(219) 00:16:09.792 fused_ordering(220) 00:16:09.792 fused_ordering(221) 00:16:09.792 fused_ordering(222) 00:16:09.792 fused_ordering(223) 00:16:09.792 fused_ordering(224) 00:16:09.792 fused_ordering(225) 00:16:09.792 fused_ordering(226) 00:16:09.792 fused_ordering(227) 00:16:09.792 fused_ordering(228) 00:16:09.792 fused_ordering(229) 00:16:09.792 fused_ordering(230) 00:16:09.792 fused_ordering(231) 00:16:09.792 fused_ordering(232) 00:16:09.792 fused_ordering(233) 00:16:09.792 fused_ordering(234) 00:16:09.792 fused_ordering(235) 00:16:09.792 fused_ordering(236) 00:16:09.792 fused_ordering(237) 00:16:09.792 fused_ordering(238) 00:16:09.792 fused_ordering(239) 00:16:09.792 fused_ordering(240) 00:16:09.792 fused_ordering(241) 00:16:09.792 fused_ordering(242) 00:16:09.792 fused_ordering(243) 00:16:09.792 fused_ordering(244) 00:16:09.792 fused_ordering(245) 00:16:09.792 fused_ordering(246) 00:16:09.792 fused_ordering(247) 00:16:09.792 fused_ordering(248) 00:16:09.792 fused_ordering(249) 00:16:09.792 fused_ordering(250) 00:16:09.792 fused_ordering(251) 00:16:09.792 fused_ordering(252) 00:16:09.792 fused_ordering(253) 00:16:09.792 fused_ordering(254) 00:16:09.792 fused_ordering(255) 00:16:09.792 fused_ordering(256) 00:16:09.792 fused_ordering(257) 00:16:09.792 fused_ordering(258) 00:16:09.792 fused_ordering(259) 00:16:09.792 fused_ordering(260) 00:16:09.792 fused_ordering(261) 00:16:09.792 fused_ordering(262) 00:16:09.792 fused_ordering(263) 00:16:09.792 fused_ordering(264) 00:16:09.792 fused_ordering(265) 00:16:09.792 fused_ordering(266) 00:16:09.792 fused_ordering(267) 00:16:09.792 fused_ordering(268) 00:16:09.792 fused_ordering(269) 00:16:09.792 fused_ordering(270) 00:16:09.792 fused_ordering(271) 00:16:09.792 fused_ordering(272) 00:16:09.792 fused_ordering(273) 00:16:09.792 fused_ordering(274) 00:16:09.792 fused_ordering(275) 00:16:09.792 fused_ordering(276) 00:16:09.792 fused_ordering(277) 00:16:09.792 fused_ordering(278) 00:16:09.792 fused_ordering(279) 00:16:09.792 fused_ordering(280) 00:16:09.792 fused_ordering(281) 00:16:09.792 fused_ordering(282) 00:16:09.792 fused_ordering(283) 00:16:09.792 fused_ordering(284) 00:16:09.792 fused_ordering(285) 00:16:09.792 fused_ordering(286) 00:16:09.792 fused_ordering(287) 00:16:09.792 fused_ordering(288) 00:16:09.792 fused_ordering(289) 00:16:09.793 fused_ordering(290) 00:16:09.793 fused_ordering(291) 00:16:09.793 fused_ordering(292) 00:16:09.793 fused_ordering(293) 00:16:09.793 fused_ordering(294) 00:16:09.793 fused_ordering(295) 00:16:09.793 fused_ordering(296) 00:16:09.793 fused_ordering(297) 00:16:09.793 fused_ordering(298) 00:16:09.793 fused_ordering(299) 00:16:09.793 fused_ordering(300) 00:16:09.793 fused_ordering(301) 00:16:09.793 fused_ordering(302) 00:16:09.793 fused_ordering(303) 00:16:09.793 fused_ordering(304) 00:16:09.793 fused_ordering(305) 00:16:09.793 fused_ordering(306) 00:16:09.793 fused_ordering(307) 00:16:09.793 fused_ordering(308) 00:16:09.793 fused_ordering(309) 00:16:09.793 fused_ordering(310) 00:16:09.793 fused_ordering(311) 00:16:09.793 
fused_ordering(312) 00:16:09.793 fused_ordering(313) 00:16:09.793 fused_ordering(314) 00:16:09.793 fused_ordering(315) 00:16:09.793 fused_ordering(316) 00:16:09.793 fused_ordering(317) 00:16:09.793 fused_ordering(318) 00:16:09.793 fused_ordering(319) 00:16:09.793 fused_ordering(320) 00:16:09.793 fused_ordering(321) 00:16:09.793 fused_ordering(322) 00:16:09.793 fused_ordering(323) 00:16:09.793 fused_ordering(324) 00:16:09.793 fused_ordering(325) 00:16:09.793 fused_ordering(326) 00:16:09.793 fused_ordering(327) 00:16:09.793 fused_ordering(328) 00:16:09.793 fused_ordering(329) 00:16:09.793 fused_ordering(330) 00:16:09.793 fused_ordering(331) 00:16:09.793 fused_ordering(332) 00:16:09.793 fused_ordering(333) 00:16:09.793 fused_ordering(334) 00:16:09.793 fused_ordering(335) 00:16:09.793 fused_ordering(336) 00:16:09.793 fused_ordering(337) 00:16:09.793 fused_ordering(338) 00:16:09.793 fused_ordering(339) 00:16:09.793 fused_ordering(340) 00:16:09.793 fused_ordering(341) 00:16:09.793 fused_ordering(342) 00:16:09.793 fused_ordering(343) 00:16:09.793 fused_ordering(344) 00:16:09.793 fused_ordering(345) 00:16:09.793 fused_ordering(346) 00:16:09.793 fused_ordering(347) 00:16:09.793 fused_ordering(348) 00:16:09.793 fused_ordering(349) 00:16:09.793 fused_ordering(350) 00:16:09.793 fused_ordering(351) 00:16:09.793 fused_ordering(352) 00:16:09.793 fused_ordering(353) 00:16:09.793 fused_ordering(354) 00:16:09.793 fused_ordering(355) 00:16:09.793 fused_ordering(356) 00:16:09.793 fused_ordering(357) 00:16:09.793 fused_ordering(358) 00:16:09.793 fused_ordering(359) 00:16:09.793 fused_ordering(360) 00:16:09.793 fused_ordering(361) 00:16:09.793 fused_ordering(362) 00:16:09.793 fused_ordering(363) 00:16:09.793 fused_ordering(364) 00:16:09.793 fused_ordering(365) 00:16:09.793 fused_ordering(366) 00:16:09.793 fused_ordering(367) 00:16:09.793 fused_ordering(368) 00:16:09.793 fused_ordering(369) 00:16:09.793 fused_ordering(370) 00:16:09.793 fused_ordering(371) 00:16:09.793 fused_ordering(372) 00:16:09.793 fused_ordering(373) 00:16:09.793 fused_ordering(374) 00:16:09.793 fused_ordering(375) 00:16:09.793 fused_ordering(376) 00:16:09.793 fused_ordering(377) 00:16:09.793 fused_ordering(378) 00:16:09.793 fused_ordering(379) 00:16:09.793 fused_ordering(380) 00:16:09.793 fused_ordering(381) 00:16:09.793 fused_ordering(382) 00:16:09.793 fused_ordering(383) 00:16:09.793 fused_ordering(384) 00:16:09.793 fused_ordering(385) 00:16:09.793 fused_ordering(386) 00:16:09.793 fused_ordering(387) 00:16:09.793 fused_ordering(388) 00:16:09.793 fused_ordering(389) 00:16:09.793 fused_ordering(390) 00:16:09.793 fused_ordering(391) 00:16:09.793 fused_ordering(392) 00:16:09.793 fused_ordering(393) 00:16:09.793 fused_ordering(394) 00:16:09.793 fused_ordering(395) 00:16:09.793 fused_ordering(396) 00:16:09.793 fused_ordering(397) 00:16:09.793 fused_ordering(398) 00:16:09.793 fused_ordering(399) 00:16:09.793 fused_ordering(400) 00:16:09.793 fused_ordering(401) 00:16:09.793 fused_ordering(402) 00:16:09.793 fused_ordering(403) 00:16:09.793 fused_ordering(404) 00:16:09.793 fused_ordering(405) 00:16:09.793 fused_ordering(406) 00:16:09.793 fused_ordering(407) 00:16:09.793 fused_ordering(408) 00:16:09.793 fused_ordering(409) 00:16:09.793 fused_ordering(410) 00:16:09.793 fused_ordering(411) 00:16:09.793 fused_ordering(412) 00:16:09.793 fused_ordering(413) 00:16:09.793 fused_ordering(414) 00:16:09.793 fused_ordering(415) 00:16:09.793 fused_ordering(416) 00:16:09.793 fused_ordering(417) 00:16:09.793 fused_ordering(418) 00:16:09.793 fused_ordering(419) 
00:16:09.793 fused_ordering(420) 00:16:09.793 fused_ordering(421) 00:16:09.793 fused_ordering(422) 00:16:09.793 fused_ordering(423) 00:16:09.793 fused_ordering(424) 00:16:09.793 fused_ordering(425) 00:16:09.793 fused_ordering(426) 00:16:09.793 fused_ordering(427) 00:16:09.793 fused_ordering(428) 00:16:09.793 fused_ordering(429) 00:16:09.793 fused_ordering(430) 00:16:09.793 fused_ordering(431) 00:16:09.793 fused_ordering(432) 00:16:09.793 fused_ordering(433) 00:16:09.793 fused_ordering(434) 00:16:09.793 fused_ordering(435) 00:16:09.793 fused_ordering(436) 00:16:09.793 fused_ordering(437) 00:16:09.793 fused_ordering(438) 00:16:09.793 fused_ordering(439) 00:16:09.793 fused_ordering(440) 00:16:09.793 fused_ordering(441) 00:16:09.793 fused_ordering(442) 00:16:09.793 fused_ordering(443) 00:16:09.793 fused_ordering(444) 00:16:09.793 fused_ordering(445) 00:16:09.793 fused_ordering(446) 00:16:09.793 fused_ordering(447) 00:16:09.793 fused_ordering(448) 00:16:09.793 fused_ordering(449) 00:16:09.793 fused_ordering(450) 00:16:09.793 fused_ordering(451) 00:16:09.793 fused_ordering(452) 00:16:09.793 fused_ordering(453) 00:16:09.793 fused_ordering(454) 00:16:09.793 fused_ordering(455) 00:16:09.793 fused_ordering(456) 00:16:09.793 fused_ordering(457) 00:16:09.793 fused_ordering(458) 00:16:09.793 fused_ordering(459) 00:16:09.793 fused_ordering(460) 00:16:09.793 fused_ordering(461) 00:16:09.793 fused_ordering(462) 00:16:09.793 fused_ordering(463) 00:16:09.793 fused_ordering(464) 00:16:09.793 fused_ordering(465) 00:16:09.793 fused_ordering(466) 00:16:09.793 fused_ordering(467) 00:16:09.793 fused_ordering(468) 00:16:09.793 fused_ordering(469) 00:16:09.793 fused_ordering(470) 00:16:09.793 fused_ordering(471) 00:16:09.793 fused_ordering(472) 00:16:09.793 fused_ordering(473) 00:16:09.793 fused_ordering(474) 00:16:09.793 fused_ordering(475) 00:16:09.793 fused_ordering(476) 00:16:09.793 fused_ordering(477) 00:16:09.793 fused_ordering(478) 00:16:09.793 fused_ordering(479) 00:16:09.793 fused_ordering(480) 00:16:09.793 fused_ordering(481) 00:16:09.793 fused_ordering(482) 00:16:09.793 fused_ordering(483) 00:16:09.793 fused_ordering(484) 00:16:09.793 fused_ordering(485) 00:16:09.793 fused_ordering(486) 00:16:09.793 fused_ordering(487) 00:16:09.793 fused_ordering(488) 00:16:09.793 fused_ordering(489) 00:16:09.793 fused_ordering(490) 00:16:09.793 fused_ordering(491) 00:16:09.793 fused_ordering(492) 00:16:09.793 fused_ordering(493) 00:16:09.793 fused_ordering(494) 00:16:09.793 fused_ordering(495) 00:16:09.793 fused_ordering(496) 00:16:09.793 fused_ordering(497) 00:16:09.793 fused_ordering(498) 00:16:09.793 fused_ordering(499) 00:16:09.793 fused_ordering(500) 00:16:09.793 fused_ordering(501) 00:16:09.793 fused_ordering(502) 00:16:09.793 fused_ordering(503) 00:16:09.793 fused_ordering(504) 00:16:09.793 fused_ordering(505) 00:16:09.793 fused_ordering(506) 00:16:09.793 fused_ordering(507) 00:16:09.793 fused_ordering(508) 00:16:09.793 fused_ordering(509) 00:16:09.793 fused_ordering(510) 00:16:09.793 fused_ordering(511) 00:16:09.793 fused_ordering(512) 00:16:09.793 fused_ordering(513) 00:16:09.793 fused_ordering(514) 00:16:09.793 fused_ordering(515) 00:16:09.793 fused_ordering(516) 00:16:09.793 fused_ordering(517) 00:16:09.793 fused_ordering(518) 00:16:09.793 fused_ordering(519) 00:16:09.793 fused_ordering(520) 00:16:09.793 fused_ordering(521) 00:16:09.794 fused_ordering(522) 00:16:09.794 fused_ordering(523) 00:16:09.794 fused_ordering(524) 00:16:09.794 fused_ordering(525) 00:16:09.794 fused_ordering(526) 00:16:09.794 
fused_ordering(527) 00:16:09.794 fused_ordering(528) 00:16:09.794 fused_ordering(529) 00:16:09.794 fused_ordering(530) 00:16:09.794 fused_ordering(531) 00:16:09.794 fused_ordering(532) 00:16:09.794 fused_ordering(533) 00:16:09.794 fused_ordering(534) 00:16:09.794 fused_ordering(535) 00:16:09.794 fused_ordering(536) 00:16:09.794 fused_ordering(537) 00:16:09.794 fused_ordering(538) 00:16:09.794 fused_ordering(539) 00:16:09.794 fused_ordering(540) 00:16:09.794 fused_ordering(541) 00:16:09.794 fused_ordering(542) 00:16:09.794 fused_ordering(543) 00:16:09.794 fused_ordering(544) 00:16:09.794 fused_ordering(545) 00:16:09.794 fused_ordering(546) 00:16:09.794 fused_ordering(547) 00:16:09.794 fused_ordering(548) 00:16:09.794 fused_ordering(549) 00:16:09.794 fused_ordering(550) 00:16:09.794 fused_ordering(551) 00:16:09.794 fused_ordering(552) 00:16:09.794 fused_ordering(553) 00:16:09.794 fused_ordering(554) 00:16:09.794 fused_ordering(555) 00:16:09.794 fused_ordering(556) 00:16:09.794 fused_ordering(557) 00:16:09.794 fused_ordering(558) 00:16:09.794 fused_ordering(559) 00:16:09.794 fused_ordering(560) 00:16:09.794 fused_ordering(561) 00:16:09.794 fused_ordering(562) 00:16:09.794 fused_ordering(563) 00:16:09.794 fused_ordering(564) 00:16:09.794 fused_ordering(565) 00:16:09.794 fused_ordering(566) 00:16:09.794 fused_ordering(567) 00:16:09.794 fused_ordering(568) 00:16:09.794 fused_ordering(569) 00:16:09.794 fused_ordering(570) 00:16:09.794 fused_ordering(571) 00:16:09.794 fused_ordering(572) 00:16:09.794 fused_ordering(573) 00:16:09.794 fused_ordering(574) 00:16:09.794 fused_ordering(575) 00:16:09.794 fused_ordering(576) 00:16:09.794 fused_ordering(577) 00:16:09.794 fused_ordering(578) 00:16:09.794 fused_ordering(579) 00:16:09.794 fused_ordering(580) 00:16:09.794 fused_ordering(581) 00:16:09.794 fused_ordering(582) 00:16:09.794 fused_ordering(583) 00:16:09.794 fused_ordering(584) 00:16:09.794 fused_ordering(585) 00:16:09.794 fused_ordering(586) 00:16:09.794 fused_ordering(587) 00:16:09.794 fused_ordering(588) 00:16:09.794 fused_ordering(589) 00:16:09.794 fused_ordering(590) 00:16:09.794 fused_ordering(591) 00:16:09.794 fused_ordering(592) 00:16:09.794 fused_ordering(593) 00:16:09.794 fused_ordering(594) 00:16:09.794 fused_ordering(595) 00:16:09.794 fused_ordering(596) 00:16:09.794 fused_ordering(597) 00:16:09.794 fused_ordering(598) 00:16:09.794 fused_ordering(599) 00:16:09.794 fused_ordering(600) 00:16:09.794 fused_ordering(601) 00:16:09.794 fused_ordering(602) 00:16:09.794 fused_ordering(603) 00:16:09.794 fused_ordering(604) 00:16:09.794 fused_ordering(605) 00:16:09.794 fused_ordering(606) 00:16:09.794 fused_ordering(607) 00:16:09.794 fused_ordering(608) 00:16:09.794 fused_ordering(609) 00:16:09.794 fused_ordering(610) 00:16:09.794 fused_ordering(611) 00:16:09.794 fused_ordering(612) 00:16:09.794 fused_ordering(613) 00:16:09.794 fused_ordering(614) 00:16:09.794 fused_ordering(615) 00:16:10.054 fused_ordering(616) 00:16:10.054 fused_ordering(617) 00:16:10.054 fused_ordering(618) 00:16:10.054 fused_ordering(619) 00:16:10.054 fused_ordering(620) 00:16:10.054 fused_ordering(621) 00:16:10.054 fused_ordering(622) 00:16:10.054 fused_ordering(623) 00:16:10.054 fused_ordering(624) 00:16:10.054 fused_ordering(625) 00:16:10.054 fused_ordering(626) 00:16:10.054 fused_ordering(627) 00:16:10.054 fused_ordering(628) 00:16:10.054 fused_ordering(629) 00:16:10.054 fused_ordering(630) 00:16:10.054 fused_ordering(631) 00:16:10.054 fused_ordering(632) 00:16:10.054 fused_ordering(633) 00:16:10.054 fused_ordering(634) 
00:16:10.054 fused_ordering(635) 00:16:10.054 fused_ordering(636) 00:16:10.054 fused_ordering(637) 00:16:10.054 fused_ordering(638) 00:16:10.054 fused_ordering(639) 00:16:10.054 fused_ordering(640) 00:16:10.054 fused_ordering(641) 00:16:10.054 fused_ordering(642) 00:16:10.054 fused_ordering(643) 00:16:10.054 fused_ordering(644) 00:16:10.054 fused_ordering(645) 00:16:10.054 fused_ordering(646) 00:16:10.054 fused_ordering(647) 00:16:10.054 fused_ordering(648) 00:16:10.054 fused_ordering(649) 00:16:10.054 fused_ordering(650) 00:16:10.054 fused_ordering(651) 00:16:10.054 fused_ordering(652) 00:16:10.054 fused_ordering(653) 00:16:10.054 fused_ordering(654) 00:16:10.054 fused_ordering(655) 00:16:10.054 fused_ordering(656) 00:16:10.054 fused_ordering(657) 00:16:10.054 fused_ordering(658) 00:16:10.054 fused_ordering(659) 00:16:10.054 fused_ordering(660) 00:16:10.054 fused_ordering(661) 00:16:10.054 fused_ordering(662) 00:16:10.054 fused_ordering(663) 00:16:10.054 fused_ordering(664) 00:16:10.054 fused_ordering(665) 00:16:10.054 fused_ordering(666) 00:16:10.054 fused_ordering(667) 00:16:10.054 fused_ordering(668) 00:16:10.054 fused_ordering(669) 00:16:10.054 fused_ordering(670) 00:16:10.054 fused_ordering(671) 00:16:10.054 fused_ordering(672) 00:16:10.054 fused_ordering(673) 00:16:10.054 fused_ordering(674) 00:16:10.054 fused_ordering(675) 00:16:10.054 fused_ordering(676) 00:16:10.054 fused_ordering(677) 00:16:10.054 fused_ordering(678) 00:16:10.054 fused_ordering(679) 00:16:10.054 fused_ordering(680) 00:16:10.054 fused_ordering(681) 00:16:10.054 fused_ordering(682) 00:16:10.054 fused_ordering(683) 00:16:10.054 fused_ordering(684) 00:16:10.054 fused_ordering(685) 00:16:10.054 fused_ordering(686) 00:16:10.054 fused_ordering(687) 00:16:10.054 fused_ordering(688) 00:16:10.054 fused_ordering(689) 00:16:10.054 fused_ordering(690) 00:16:10.054 fused_ordering(691) 00:16:10.054 fused_ordering(692) 00:16:10.054 fused_ordering(693) 00:16:10.054 fused_ordering(694) 00:16:10.054 fused_ordering(695) 00:16:10.054 fused_ordering(696) 00:16:10.054 fused_ordering(697) 00:16:10.054 fused_ordering(698) 00:16:10.054 fused_ordering(699) 00:16:10.054 fused_ordering(700) 00:16:10.054 fused_ordering(701) 00:16:10.054 fused_ordering(702) 00:16:10.054 fused_ordering(703) 00:16:10.054 fused_ordering(704) 00:16:10.054 fused_ordering(705) 00:16:10.054 fused_ordering(706) 00:16:10.054 fused_ordering(707) 00:16:10.054 fused_ordering(708) 00:16:10.054 fused_ordering(709) 00:16:10.054 fused_ordering(710) 00:16:10.054 fused_ordering(711) 00:16:10.054 fused_ordering(712) 00:16:10.054 fused_ordering(713) 00:16:10.054 fused_ordering(714) 00:16:10.054 fused_ordering(715) 00:16:10.054 fused_ordering(716) 00:16:10.054 fused_ordering(717) 00:16:10.054 fused_ordering(718) 00:16:10.054 fused_ordering(719) 00:16:10.054 fused_ordering(720) 00:16:10.054 fused_ordering(721) 00:16:10.054 fused_ordering(722) 00:16:10.054 fused_ordering(723) 00:16:10.054 fused_ordering(724) 00:16:10.054 fused_ordering(725) 00:16:10.054 fused_ordering(726) 00:16:10.054 fused_ordering(727) 00:16:10.054 fused_ordering(728) 00:16:10.054 fused_ordering(729) 00:16:10.054 fused_ordering(730) 00:16:10.054 fused_ordering(731) 00:16:10.054 fused_ordering(732) 00:16:10.054 fused_ordering(733) 00:16:10.054 fused_ordering(734) 00:16:10.054 fused_ordering(735) 00:16:10.054 fused_ordering(736) 00:16:10.054 fused_ordering(737) 00:16:10.054 fused_ordering(738) 00:16:10.054 fused_ordering(739) 00:16:10.054 fused_ordering(740) 00:16:10.054 fused_ordering(741) 00:16:10.054 
fused_ordering(742) 00:16:10.054 fused_ordering(743) 00:16:10.054 fused_ordering(744) 00:16:10.054 fused_ordering(745) 00:16:10.054 fused_ordering(746) 00:16:10.054 fused_ordering(747) 00:16:10.054 fused_ordering(748) 00:16:10.054 fused_ordering(749) 00:16:10.054 fused_ordering(750) 00:16:10.054 fused_ordering(751) 00:16:10.054 fused_ordering(752) 00:16:10.054 fused_ordering(753) 00:16:10.054 fused_ordering(754) 00:16:10.054 fused_ordering(755) 00:16:10.054 fused_ordering(756) 00:16:10.054 fused_ordering(757) 00:16:10.054 fused_ordering(758) 00:16:10.054 fused_ordering(759) 00:16:10.054 fused_ordering(760) 00:16:10.054 fused_ordering(761) 00:16:10.054 fused_ordering(762) 00:16:10.054 fused_ordering(763) 00:16:10.054 fused_ordering(764) 00:16:10.054 fused_ordering(765) 00:16:10.054 fused_ordering(766) 00:16:10.054 fused_ordering(767) 00:16:10.054 fused_ordering(768) 00:16:10.054 fused_ordering(769) 00:16:10.054 fused_ordering(770) 00:16:10.054 fused_ordering(771) 00:16:10.054 fused_ordering(772) 00:16:10.054 fused_ordering(773) 00:16:10.054 fused_ordering(774) 00:16:10.054 fused_ordering(775) 00:16:10.054 fused_ordering(776) 00:16:10.054 fused_ordering(777) 00:16:10.054 fused_ordering(778) 00:16:10.054 fused_ordering(779) 00:16:10.054 fused_ordering(780) 00:16:10.054 fused_ordering(781) 00:16:10.054 fused_ordering(782) 00:16:10.054 fused_ordering(783) 00:16:10.054 fused_ordering(784) 00:16:10.054 fused_ordering(785) 00:16:10.054 fused_ordering(786) 00:16:10.054 fused_ordering(787) 00:16:10.054 fused_ordering(788) 00:16:10.054 fused_ordering(789) 00:16:10.054 fused_ordering(790) 00:16:10.054 fused_ordering(791) 00:16:10.054 fused_ordering(792) 00:16:10.054 fused_ordering(793) 00:16:10.054 fused_ordering(794) 00:16:10.054 fused_ordering(795) 00:16:10.054 fused_ordering(796) 00:16:10.054 fused_ordering(797) 00:16:10.054 fused_ordering(798) 00:16:10.054 fused_ordering(799) 00:16:10.054 fused_ordering(800) 00:16:10.054 fused_ordering(801) 00:16:10.054 fused_ordering(802) 00:16:10.054 fused_ordering(803) 00:16:10.054 fused_ordering(804) 00:16:10.054 fused_ordering(805) 00:16:10.054 fused_ordering(806) 00:16:10.054 fused_ordering(807) 00:16:10.054 fused_ordering(808) 00:16:10.054 fused_ordering(809) 00:16:10.054 fused_ordering(810) 00:16:10.054 fused_ordering(811) 00:16:10.054 fused_ordering(812) 00:16:10.054 fused_ordering(813) 00:16:10.054 fused_ordering(814) 00:16:10.054 fused_ordering(815) 00:16:10.054 fused_ordering(816) 00:16:10.054 fused_ordering(817) 00:16:10.054 fused_ordering(818) 00:16:10.054 fused_ordering(819) 00:16:10.054 fused_ordering(820) 00:16:10.321 fused_ordering(821) 00:16:10.321 fused_ordering(822) 00:16:10.322 fused_ordering(823) 00:16:10.322 fused_ordering(824) 00:16:10.322 fused_ordering(825) 00:16:10.322 fused_ordering(826) 00:16:10.322 fused_ordering(827) 00:16:10.322 fused_ordering(828) 00:16:10.322 fused_ordering(829) 00:16:10.322 fused_ordering(830) 00:16:10.322 fused_ordering(831) 00:16:10.322 fused_ordering(832) 00:16:10.322 fused_ordering(833) 00:16:10.322 fused_ordering(834) 00:16:10.322 fused_ordering(835) 00:16:10.322 fused_ordering(836) 00:16:10.322 fused_ordering(837) 00:16:10.322 fused_ordering(838) 00:16:10.322 fused_ordering(839) 00:16:10.322 fused_ordering(840) 00:16:10.322 fused_ordering(841) 00:16:10.322 fused_ordering(842) 00:16:10.322 fused_ordering(843) 00:16:10.322 fused_ordering(844) 00:16:10.322 fused_ordering(845) 00:16:10.322 fused_ordering(846) 00:16:10.322 fused_ordering(847) 00:16:10.322 fused_ordering(848) 00:16:10.322 fused_ordering(849) 
00:16:10.322 fused_ordering(850) 00:16:10.322 fused_ordering(851) 00:16:10.322 fused_ordering(852) 00:16:10.322 fused_ordering(853) 00:16:10.322 fused_ordering(854) 00:16:10.322 fused_ordering(855) 00:16:10.322 fused_ordering(856) 00:16:10.322 fused_ordering(857) 00:16:10.322 fused_ordering(858) 00:16:10.322 fused_ordering(859) 00:16:10.322 fused_ordering(860) 00:16:10.322 fused_ordering(861) 00:16:10.322 fused_ordering(862) 00:16:10.322 fused_ordering(863) 00:16:10.322 fused_ordering(864) 00:16:10.322 fused_ordering(865) 00:16:10.322 fused_ordering(866) 00:16:10.322 fused_ordering(867) 00:16:10.322 fused_ordering(868) 00:16:10.322 fused_ordering(869) 00:16:10.322 fused_ordering(870) 00:16:10.322 fused_ordering(871) 00:16:10.322 fused_ordering(872) 00:16:10.322 fused_ordering(873) 00:16:10.322 fused_ordering(874) 00:16:10.322 fused_ordering(875) 00:16:10.322 fused_ordering(876) 00:16:10.322 fused_ordering(877) 00:16:10.322 fused_ordering(878) 00:16:10.322 fused_ordering(879) 00:16:10.322 fused_ordering(880) 00:16:10.322 fused_ordering(881) 00:16:10.322 fused_ordering(882) 00:16:10.322 fused_ordering(883) 00:16:10.322 fused_ordering(884) 00:16:10.322 fused_ordering(885) 00:16:10.322 fused_ordering(886) 00:16:10.322 fused_ordering(887) 00:16:10.322 fused_ordering(888) 00:16:10.322 fused_ordering(889) 00:16:10.322 fused_ordering(890) 00:16:10.322 fused_ordering(891) 00:16:10.322 fused_ordering(892) 00:16:10.322 fused_ordering(893) 00:16:10.322 fused_ordering(894) 00:16:10.322 fused_ordering(895) 00:16:10.322 fused_ordering(896) 00:16:10.322 fused_ordering(897) 00:16:10.322 fused_ordering(898) 00:16:10.322 fused_ordering(899) 00:16:10.322 fused_ordering(900) 00:16:10.322 fused_ordering(901) 00:16:10.322 fused_ordering(902) 00:16:10.322 fused_ordering(903) 00:16:10.322 fused_ordering(904) 00:16:10.322 fused_ordering(905) 00:16:10.322 fused_ordering(906) 00:16:10.322 fused_ordering(907) 00:16:10.322 fused_ordering(908) 00:16:10.322 fused_ordering(909) 00:16:10.322 fused_ordering(910) 00:16:10.322 fused_ordering(911) 00:16:10.322 fused_ordering(912) 00:16:10.322 fused_ordering(913) 00:16:10.322 fused_ordering(914) 00:16:10.322 fused_ordering(915) 00:16:10.322 fused_ordering(916) 00:16:10.322 fused_ordering(917) 00:16:10.322 fused_ordering(918) 00:16:10.322 fused_ordering(919) 00:16:10.322 fused_ordering(920) 00:16:10.322 fused_ordering(921) 00:16:10.322 fused_ordering(922) 00:16:10.322 fused_ordering(923) 00:16:10.322 fused_ordering(924) 00:16:10.322 fused_ordering(925) 00:16:10.322 fused_ordering(926) 00:16:10.322 fused_ordering(927) 00:16:10.322 fused_ordering(928) 00:16:10.322 fused_ordering(929) 00:16:10.322 fused_ordering(930) 00:16:10.322 fused_ordering(931) 00:16:10.322 fused_ordering(932) 00:16:10.322 fused_ordering(933) 00:16:10.322 fused_ordering(934) 00:16:10.322 fused_ordering(935) 00:16:10.322 fused_ordering(936) 00:16:10.322 fused_ordering(937) 00:16:10.322 fused_ordering(938) 00:16:10.322 fused_ordering(939) 00:16:10.322 fused_ordering(940) 00:16:10.322 fused_ordering(941) 00:16:10.322 fused_ordering(942) 00:16:10.322 fused_ordering(943) 00:16:10.322 fused_ordering(944) 00:16:10.322 fused_ordering(945) 00:16:10.322 fused_ordering(946) 00:16:10.322 fused_ordering(947) 00:16:10.322 fused_ordering(948) 00:16:10.322 fused_ordering(949) 00:16:10.322 fused_ordering(950) 00:16:10.322 fused_ordering(951) 00:16:10.322 fused_ordering(952) 00:16:10.322 fused_ordering(953) 00:16:10.322 fused_ordering(954) 00:16:10.322 fused_ordering(955) 00:16:10.322 fused_ordering(956) 00:16:10.322 
fused_ordering(957) 00:16:10.322 fused_ordering(958) 00:16:10.322 fused_ordering(959) 00:16:10.322 fused_ordering(960) 00:16:10.322 fused_ordering(961) 00:16:10.322 fused_ordering(962) 00:16:10.322 fused_ordering(963) 00:16:10.322 fused_ordering(964) 00:16:10.322 fused_ordering(965) 00:16:10.322 fused_ordering(966) 00:16:10.322 fused_ordering(967) 00:16:10.322 fused_ordering(968) 00:16:10.322 fused_ordering(969) 00:16:10.322 fused_ordering(970) 00:16:10.322 fused_ordering(971) 00:16:10.322 fused_ordering(972) 00:16:10.322 fused_ordering(973) 00:16:10.322 fused_ordering(974) 00:16:10.322 fused_ordering(975) 00:16:10.322 fused_ordering(976) 00:16:10.322 fused_ordering(977) 00:16:10.322 fused_ordering(978) 00:16:10.322 fused_ordering(979) 00:16:10.322 fused_ordering(980) 00:16:10.322 fused_ordering(981) 00:16:10.322 fused_ordering(982) 00:16:10.322 fused_ordering(983) 00:16:10.322 fused_ordering(984) 00:16:10.322 fused_ordering(985) 00:16:10.322 fused_ordering(986) 00:16:10.322 fused_ordering(987) 00:16:10.322 fused_ordering(988) 00:16:10.322 fused_ordering(989) 00:16:10.322 fused_ordering(990) 00:16:10.322 fused_ordering(991) 00:16:10.322 fused_ordering(992) 00:16:10.322 fused_ordering(993) 00:16:10.322 fused_ordering(994) 00:16:10.322 fused_ordering(995) 00:16:10.322 fused_ordering(996) 00:16:10.322 fused_ordering(997) 00:16:10.322 fused_ordering(998) 00:16:10.322 fused_ordering(999) 00:16:10.322 fused_ordering(1000) 00:16:10.322 fused_ordering(1001) 00:16:10.322 fused_ordering(1002) 00:16:10.322 fused_ordering(1003) 00:16:10.322 fused_ordering(1004) 00:16:10.322 fused_ordering(1005) 00:16:10.322 fused_ordering(1006) 00:16:10.322 fused_ordering(1007) 00:16:10.322 fused_ordering(1008) 00:16:10.322 fused_ordering(1009) 00:16:10.322 fused_ordering(1010) 00:16:10.322 fused_ordering(1011) 00:16:10.322 fused_ordering(1012) 00:16:10.322 fused_ordering(1013) 00:16:10.322 fused_ordering(1014) 00:16:10.322 fused_ordering(1015) 00:16:10.322 fused_ordering(1016) 00:16:10.322 fused_ordering(1017) 00:16:10.322 fused_ordering(1018) 00:16:10.322 fused_ordering(1019) 00:16:10.322 fused_ordering(1020) 00:16:10.322 fused_ordering(1021) 00:16:10.322 fused_ordering(1022) 00:16:10.322 fused_ordering(1023) 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:10.322 rmmod nvme_rdma 00:16:10.322 rmmod nvme_fabrics 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 776136 ']' 00:16:10.322 
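
nvmftestfini's cleanup path drops errexit and repeatedly tries to unload the kernel modules; the bare "rmmod nvme_rdma" and "rmmod nvme_fabrics" lines are the modprobe -v output. A condensed sketch of that retry pattern: the 20-try bound and the two modprobe -r calls are taken from the trace, while the break-on-success and the sleep are simplifications, since the loop body is not fully visible here.

    # Sketch of the module-unload retries performed during nvmfcleanup.
    set +e
    for _ in $(seq 1 20); do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
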
02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 776136 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 776136 ']' 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 776136 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 776136 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:10.322 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 776136' 00:16:10.322 killing process with pid 776136 00:16:10.323 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 776136 00:16:10.323 [2024-05-15 02:42:13.574370] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:10.323 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 776136 00:16:10.582 [2024-05-15 02:42:13.627620] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:10.582 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.582 02:42:13 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:10.582 00:16:10.582 real 0m8.863s 00:16:10.582 user 0m5.183s 00:16:10.582 sys 0m5.329s 00:16:10.582 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:10.582 02:42:13 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:10.582 ************************************ 00:16:10.582 END TEST nvmf_fused_ordering 00:16:10.582 ************************************ 00:16:10.582 02:42:13 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:10.582 02:42:13 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:10.582 02:42:13 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:10.582 02:42:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:10.842 ************************************ 00:16:10.842 START TEST nvmf_delete_subsystem 00:16:10.842 ************************************ 00:16:10.842 02:42:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:10.842 * Looking for test storage... 
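
killprocess first probes the pid with kill -0 and inspects the process name before sending the signal, and the caller then waits for the target to exit. A sketch of that kill-and-wait pattern using only the checks visible in the trace; the pid used in the usage line is the nvmfpid captured for this run:

    # Sketch of the kill-and-wait pattern used to stop the target.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                         # process already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")    # reports reactor_1 in this run
        # The harness has a separate branch for processes named "sudo"; omitted in this sketch.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # succeeds because the target is a child of the test shell, as in the harness
    }

    killprocess 776136

After the target exits, run_test moves on to nvmf_delete_subsystem, which re-sources nvmf/common.sh and repeats the same device discovery seen earlier.
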
00:16:10.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:10.842 02:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.842 02:42:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:10.842 02:42:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.415 02:42:20 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:17.415 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:17.415 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.415 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:17.416 Found net devices under 0000:18:00.0: mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.416 02:42:20 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:17.416 Found net devices under 0000:18:00.1: mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:17.416 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:17.416 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:16:17.416 altname enp24s0f0np0 00:16:17.416 altname ens785f0np0 00:16:17.416 inet 192.168.100.8/24 scope global mlx_0_0 00:16:17.416 valid_lft forever preferred_lft forever 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:17.416 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:17.416 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:16:17.416 altname enp24s0f1np1 00:16:17.416 altname ens785f1np1 00:16:17.416 inet 192.168.100.9/24 scope global mlx_0_1 00:16:17.416 valid_lft forever preferred_lft forever 00:16:17.416 02:42:20 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:17.416 192.168.100.9' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:17.416 192.168.100.9' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:17.416 192.168.100.9' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:17.416 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=779247 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 779247 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 779247 ']' 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.417 [2024-05-15 02:42:20.404621] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:16:17.417 [2024-05-15 02:42:20.404687] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.417 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.417 [2024-05-15 02:42:20.510397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.417 [2024-05-15 02:42:20.557570] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.417 [2024-05-15 02:42:20.557618] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.417 [2024-05-15 02:42:20.557632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.417 [2024-05-15 02:42:20.557645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.417 [2024-05-15 02:42:20.557656] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.417 [2024-05-15 02:42:20.557711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.417 [2024-05-15 02:42:20.557717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:17.417 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.676 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.676 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:17.676 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 [2024-05-15 02:42:20.740348] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13c37a0/0x13c7c90) succeed. 00:16:17.677 [2024-05-15 02:42:20.753810] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13c4ca0/0x1409320) succeed. 
00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 [2024-05-15 02:42:20.860190] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:17.677 [2024-05-15 02:42:20.860522] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 NULL1 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 Delay0 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=779427 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:17.677 02:42:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:17.677 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.935 [2024-05-15 02:42:20.973400] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:19.860 02:42:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.860 02:42:22 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:19.860 02:42:22 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 NVMe io qpair process completion error 00:16:20.797 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:20.797 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:20.797 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 779427 00:16:20.797 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:21.365 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:21.365 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 779427 00:16:21.365 02:42:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 
00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 
00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 
00:16:21.934 Read completed with error (sct=0, sc=8) 00:16:21.934 starting I/O failed: -6 00:16:21.934 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 starting I/O failed: -6 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, 
sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed 
with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Read completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Write completed with error (sct=0, sc=8) 00:16:21.935 Initializing NVMe Controllers 00:16:21.935 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.935 Controller IO queue size 128, less than required. 00:16:21.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:21.935 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:21.935 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:21.935 Initialization complete. Launching workers. 00:16:21.935 ======================================================== 00:16:21.935 Latency(us) 00:16:21.935 Device Information : IOPS MiB/s Average min max 00:16:21.935 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.52 0.04 1593452.65 1000165.97 2973781.12 00:16:21.935 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.52 0.04 1594850.89 1001828.93 2973664.34 00:16:21.935 ======================================================== 00:16:21.935 Total : 161.04 0.08 1594151.77 1000165.97 2973781.12 00:16:21.935 00:16:21.935 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:21.935 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 779427 00:16:21.935 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:21.935 [2024-05-15 02:42:25.078129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:21.935 [2024-05-15 02:42:25.078177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
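The long run of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines above, together with the "CQ transport error -6" and "in failed state" messages, is the expected outcome of this test case rather than a failure: spdk_nvme_perf still had a queue depth of 128 outstanding against nqn.2016-06.io.spdk:cnode1 when nvmf_delete_subsystem removed the subsystem, so every queued command completes with an error and the controller is marked failed. The shape of the test, condensed from the delete_subsystem.sh line numbers echoed in the trace (a readability sketch; the loop body is inferred from those traced lines, not copied from the script):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                                  # 779427 in this run
    sleep 2                                      # give perf time to connect and queue I/O
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # delete while I/O is in flight
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # wait for perf to report its errors and exit
        (( delay++ > 30 )) && exit 1             # bail out if it never terminates
        sleep 0.5
    done

Once perf exits, the "No such process" result from kill -0 is what breaks the loop, and the trace that follows re-creates the subsystem, listener and Delay0 namespace for the second pass.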
00:16:21.935 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 779427 00:16:22.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (779427) - No such process 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 779427 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 779427 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 779427 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.504 [2024-05-15 02:42:25.600573] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=779989 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- 
# delay=0 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:22.504 02:42:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:22.504 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.504 [2024-05-15 02:42:25.697354] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:23.072 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:23.072 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:23.072 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:23.641 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:23.641 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:23.641 02:42:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:23.900 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:23.900 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:23.900 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:24.466 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:24.466 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:24.466 02:42:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:25.033 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:25.034 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:25.034 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:25.601 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:25.601 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:25.601 02:42:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:26.169 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:26.169 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:26.169 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:26.427 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:26.427 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:26.427 02:42:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:26.996 02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:26.996 
02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:26.996 02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:27.564 02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:27.564 02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:27.564 02:42:30 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:28.131 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:28.131 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:28.131 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:28.390 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:28.390 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:28.390 02:42:31 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:28.958 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:28.958 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:28.958 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:29.525 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:29.525 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:29.525 02:42:32 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:29.784 Initializing NVMe Controllers 00:16:29.784 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:29.784 Controller IO queue size 128, less than required. 00:16:29.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:29.784 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:29.784 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:29.784 Initialization complete. Launching workers. 
00:16:29.784 ======================================================== 00:16:29.784 Latency(us) 00:16:29.784 Device Information : IOPS MiB/s Average min max 00:16:29.784 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001664.11 1000068.06 1004841.99 00:16:29.784 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002987.90 1000170.03 1007826.32 00:16:29.784 ======================================================== 00:16:29.784 Total : 256.00 0.12 1002326.00 1000068.06 1007826.32 00:16:29.784 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 779989 00:16:30.043 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (779989) - No such process 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 779989 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:30.043 rmmod nvme_rdma 00:16:30.043 rmmod nvme_fabrics 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 779247 ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 779247 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 779247 ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 779247 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 779247 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 779247' 00:16:30.043 killing process with pid 779247 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 779247 
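The second pass above (perf_pid 779989, a 3-second run against cnode1 backed by the Delay0 bdev) finished without the error flood of the first pass; its average latencies of roughly 1 second line up with the 1000000 us delays configured on bdev_delay_create earlier in the trace, and the polling loop again exited on the "No such process" result from kill -0. What follows is teardown: the trap is cleared and nvmftestfini unloads the NVMe fabrics modules and stops the target. A sketch of that teardown, with the module names, flags and the {1..20} retry bound taken from the echoed commands (the retry structure inside the loop is an assumption; only the loop header and the two modprobe calls are visible in the trace):

    set +e
    for i in {1..20}; do
        # "rmmod nvme_rdma" / "rmmod nvme_fabrics" above are the modprobe -r output
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill "$nvmfpid"    # 779247: the nvmf_tgt launched by nvmfappstart at the start of the test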
00:16:30.043 [2024-05-15 02:42:33.309174] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:30.043 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 779247 00:16:30.307 [2024-05-15 02:42:33.375453] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:30.307 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.307 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:30.307 00:16:30.307 real 0m19.679s 00:16:30.307 user 0m48.931s 00:16:30.307 sys 0m6.097s 00:16:30.307 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:30.307 02:42:33 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:30.307 ************************************ 00:16:30.307 END TEST nvmf_delete_subsystem 00:16:30.307 ************************************ 00:16:30.566 02:42:33 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:16:30.566 02:42:33 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:30.566 02:42:33 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:30.566 02:42:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:30.566 ************************************ 00:16:30.566 START TEST nvmf_ns_masking 00:16:30.566 ************************************ 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:16:30.566 * Looking for test storage... 
00:16:30.566 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.566 02:42:33 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=4b5d7fb0-75d2-4ed4-a467-b1ba99564f82 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 
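Before nvmftestinit does any real work, the preamble traced above has already fixed the identifiers the rest of the test reuses. Condensed into shell form (a paraphrase of the values visible in this trace, not the ns_masking.sh source verbatim):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    loops=5
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2016-06.io.spdk:host1
    HOSTID=$(uuidgen)      # 4b5d7fb0-75d2-4ed4-a467-b1ba99564f82 in this run, later passed to 'nvme connect -I'
    nvmftestinit           # from test/nvmf/common.sh: loads the RDMA modules and discovers the mlx5 ports and 192.168.100.x addresses seen below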
00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:30.567 02:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:37.138 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:37.138 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:37.138 Found net devices under 
0000:18:00.0: mlx_0_0 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:37.138 Found net devices under 0000:18:00.1: mlx_0_1 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:37.138 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:37.139 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:37.139 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:16:37.139 altname enp24s0f0np0 00:16:37.139 altname ens785f0np0 00:16:37.139 inet 192.168.100.8/24 scope global mlx_0_0 00:16:37.139 valid_lft forever preferred_lft forever 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:37.139 02:42:39 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:37.139 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:37.139 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:16:37.139 altname enp24s0f1np1 00:16:37.139 altname ens785f1np1 00:16:37.139 inet 192.168.100.9/24 scope global mlx_0_1 00:16:37.139 valid_lft forever preferred_lft forever 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:37.139 192.168.100.9' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:37.139 192.168.100.9' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@457 -- # head -n 1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:37.139 192.168.100.9' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=783899 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 783899 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 783899 ']' 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:37.139 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.139 [2024-05-15 02:42:40.185739] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:16:37.139 [2024-05-15 02:42:40.185807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.139 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.139 [2024-05-15 02:42:40.293870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.139 [2024-05-15 02:42:40.348191] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.139 [2024-05-15 02:42:40.348239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:37.139 [2024-05-15 02:42:40.348253] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.139 [2024-05-15 02:42:40.348266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.139 [2024-05-15 02:42:40.348277] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.139 [2024-05-15 02:42:40.348341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.139 [2024-05-15 02:42:40.348427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.139 [2024-05-15 02:42:40.348529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.139 [2024-05-15 02:42:40.348529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.398 02:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:37.657 [2024-05-15 02:42:40.766889] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8dcd70/0x8e1260) succeed. 00:16:37.657 [2024-05-15 02:42:40.781936] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8de3b0/0x9228f0) succeed. 
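With the RDMA transport up (the two create_ib_device notices above), the target-side provisioning performed here and over the next few steps reduces to a short RPC sequence. A condensed sketch using the same flags that appear in the trace, with the long workspace path shortened to rpc.py:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420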
00:16:37.916 02:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:16:37.916 02:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:16:37.916 02:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:37.916 Malloc1 00:16:38.176 02:42:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:38.176 Malloc2 00:16:38.436 02:42:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:38.695 02:42:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:38.695 02:42:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:38.953 [2024-05-15 02:42:42.144724] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:38.953 [2024-05-15 02:42:42.145059] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:38.953 02:42:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:16:38.953 02:42:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4b5d7fb0-75d2-4ed4-a467-b1ba99564f82 -a 192.168.100.8 -s 4420 -i 4 00:16:39.212 02:42:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.212 02:42:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:16:39.212 02:42:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.212 02:42:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:39.212 02:42:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:41.748 [ 0]:0x1 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=acc83e9fe5a647b8ba8a78056693329c 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ acc83e9fe5a647b8ba8a78056693329c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:41.748 [ 0]:0x1 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=acc83e9fe5a647b8ba8a78056693329c 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ acc83e9fe5a647b8ba8a78056693329c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:41.748 [ 1]:0x2 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:16:41.748 02:42:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.316 02:42:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.316 02:42:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:42.576 02:42:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:16:42.576 02:42:45 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4b5d7fb0-75d2-4ed4-a467-b1ba99564f82 -a 192.168.100.8 -s 4420 -i 4 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:16:42.835 02:42:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
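The visibility probe used throughout the rest of this test is the ns_is_visible helper being traced here: it lists the namespaces the controller exposes, then asks nvme id-ns for the NGUID, and treats the all-zero NGUID as "attached but masked from this host". A paraphrased sketch (details may differ slightly from the real ns_masking.sh):

    ns_is_visible() {
        local nsid=$1                                    # e.g. 0x1 or 0x2
        nvme list-ns /dev/nvme0 | grep "$nsid"           # prints "[ n]:<nsid>" only when the namespace is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # all zeros means the namespace is not visible to this host
    }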
00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.370 [ 0]:0x2 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.370 [ 0]:0x1 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=acc83e9fe5a647b8ba8a78056693329c 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ acc83e9fe5a647b8ba8a78056693329c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.370 [ 1]:0x2 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.370 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking 
-- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.629 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:45.888 [ 0]:0x2 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.888 02:42:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:45.888 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:45.888 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.888 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:16:45.888 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.147 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.406 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:16:46.406 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4b5d7fb0-75d2-4ed4-a467-b1ba99564f82 -a 192.168.100.8 -s 4420 -i 4 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@1195 -- # local i=0 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:16:46.666 02:42:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:49.204 [ 0]:0x1 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.204 02:42:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=acc83e9fe5a647b8ba8a78056693329c 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ acc83e9fe5a647b8ba8a78056693329c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:49.204 [ 1]:0x2 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # 
NOT ns_is_visible 0x1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:49.204 [ 0]:0x2 00:16:49.204 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
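What the surrounding assertions check is the per-host masking flow itself: a namespace attached with --no-auto-visible is only exposed to hosts that were explicitly granted access, and revoking the grant makes it report the all-zero NGUID again. Collected from the RPC calls visible in this trace (namespace 1 and host1 are this run's values, path shortened to rpc.py):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # namespace 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # namespace 1 is masked again

The test also exercises the negative path: calling nvmf_ns_remove_host against namespace 2 is rejected with the JSON-RPC error (-32602, Invalid parameters) shown just below, which is exactly what the NOT wrapper expects.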
00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:16:49.205 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:49.464 [2024-05-15 02:42:52.629100] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:49.464 request: 00:16:49.464 { 00:16:49.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.464 "nsid": 2, 00:16:49.464 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.464 "method": "nvmf_ns_remove_host", 00:16:49.464 "req_id": 1 00:16:49.464 } 00:16:49.464 Got JSON-RPC error response 00:16:49.464 response: 00:16:49.464 { 00:16:49.464 "code": -32602, 00:16:49.464 "message": "Invalid parameters" 00:16:49.464 } 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:49.464 [ 0]:0x2 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:49.464 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:49.723 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8488e052fc064be79135148e22e1ba6b 00:16:49.723 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8488e052fc064be79135148e22e1ba6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:49.723 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:16:49.723 02:42:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.983 02:42:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:50.242 rmmod nvme_rdma 00:16:50.242 rmmod nvme_fabrics 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 783899 ']' 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 783899 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 783899 ']' 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 783899 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:50.242 02:42:53 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 783899 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 783899' 00:16:50.242 killing process with pid 783899 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 783899 00:16:50.242 [2024-05-15 02:42:53.485570] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:50.242 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 783899 00:16:50.501 [2024-05-15 02:42:53.599207] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:16:50.760 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.760 02:42:53 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:50.760 00:16:50.760 real 0m20.186s 00:16:50.760 user 0m58.980s 00:16:50.760 sys 0m6.584s 00:16:50.760 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:50.760 02:42:53 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:50.760 ************************************ 00:16:50.760 END TEST nvmf_ns_masking 00:16:50.760 ************************************ 00:16:50.760 02:42:53 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:50.760 02:42:53 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:50.760 02:42:53 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:50.760 02:42:53 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:50.760 02:42:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:50.760 ************************************ 00:16:50.760 START TEST nvmf_nvme_cli 00:16:50.760 ************************************ 00:16:50.760 02:42:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:50.760 * Looking for test storage... 
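Teardown for the masking run above follows a fixed pattern before the next test starts. Condensed from the traced commands (pid 783899 and the cnode1 NQN are the values from this run; the nvmftestfini/killprocess wrappers add checks that are not repeated here):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the initiator session
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma                                 # unload the fabric modules
    modprobe -v -r nvme-fabrics
    kill 783899                                              # stop the nvmf_tgt reactor process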
00:16:51.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.019 02:42:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.585 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:57.586 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:57.586 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:57.586 Found net devices under 0000:18:00.0: mlx_0_0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.586 02:43:00 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:57.586 Found net devices under 0000:18:00.1: mlx_0_1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
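Before any connect can happen, nvmftestinit loads the RDMA stack and works out which ports to use. The module loading traced above (load_ib_rdma_modules) and the per-interface address lookups that follow in the next entries condense to roughly the following sketch; interface names and the 192.168.100.x addresses are specific to this rig:

    # kernel modules loaded by load_ib_rdma_modules
    modprobe ib_cm; modprobe ib_core; modprobe ib_umad; modprobe ib_uverbs
    modprobe iw_cm; modprobe rdma_cm; modprobe rdma_ucm
    # get_ip_address <iface>: first IPv4 address on an RDMA-capable port
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8 here
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9 here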
00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:57.586 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.586 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:16:57.586 altname enp24s0f0np0 00:16:57.586 altname ens785f0np0 00:16:57.586 inet 192.168.100.8/24 scope global mlx_0_0 00:16:57.586 valid_lft forever preferred_lft forever 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:57.586 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.586 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:16:57.586 altname enp24s0f1np1 00:16:57.586 altname ens785f1np1 00:16:57.586 inet 192.168.100.9/24 scope global mlx_0_1 00:16:57.586 valid_lft forever preferred_lft forever 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:57.586 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:57.587 192.168.100.9' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:57.587 192.168.100.9' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:57.587 192.168.100.9' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=788670 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 788670 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 788670 ']' 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.587 [2024-05-15 02:43:00.466439] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:16:57.587 [2024-05-15 02:43:00.466514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.587 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.587 [2024-05-15 02:43:00.579977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.587 [2024-05-15 02:43:00.633372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.587 [2024-05-15 02:43:00.633426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.587 [2024-05-15 02:43:00.633440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.587 [2024-05-15 02:43:00.633459] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.587 [2024-05-15 02:43:00.633470] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
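nvmfappstart, traced above, boils down to launching the target binary and blocking until its RPC socket is up. A simplified paraphrase using the values from this run (pid 788670, socket /var/tmp/spdk.sock); the real waitforlisten polls the RPC interface with retries and timeouts rather than just testing for the socket file:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                  # 788670 in this run
    while [ ! -S /var/tmp/spdk.sock ]; do       # crude stand-in for waitforlisten
        sleep 0.5
    done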
00:16:57.587 [2024-05-15 02:43:00.633545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.587 [2024-05-15 02:43:00.633648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.587 [2024-05-15 02:43:00.633739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.587 [2024-05-15 02:43:00.633740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.587 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.587 [2024-05-15 02:43:00.835521] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a5d70/0x7aa260) succeed. 00:16:57.587 [2024-05-15 02:43:00.850603] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a73b0/0x7eb8f0) succeed. 00:16:57.845 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.845 02:43:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.845 02:43:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.845 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.845 Malloc0 00:16:57.845 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.845 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:57.845 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.845 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.845 Malloc1 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 [2024-05-15 02:43:01.088888] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:57.846 [2024-05-15 02:43:01.089302] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.846 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -a 192.168.100.8 -s 4420 00:16:58.104 00:16:58.104 Discovery Log Number of Records 2, Generation counter 2 00:16:58.104 =====Discovery Log Entry 0====== 00:16:58.104 trtype: rdma 00:16:58.104 adrfam: ipv4 00:16:58.104 subtype: current discovery subsystem 00:16:58.104 treq: not required 00:16:58.104 portid: 0 00:16:58.104 trsvcid: 4420 00:16:58.104 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:58.104 traddr: 192.168.100.8 00:16:58.104 eflags: explicit discovery connections, duplicate discovery information 00:16:58.104 rdma_prtype: not specified 00:16:58.104 rdma_qptype: connected 00:16:58.104 rdma_cms: rdma-cm 00:16:58.104 rdma_pkey: 0x0000 00:16:58.104 =====Discovery Log Entry 1====== 00:16:58.104 trtype: rdma 00:16:58.104 adrfam: ipv4 00:16:58.104 subtype: nvme subsystem 00:16:58.104 treq: not required 00:16:58.104 portid: 0 00:16:58.104 trsvcid: 4420 00:16:58.104 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:58.104 traddr: 192.168.100.8 00:16:58.104 eflags: none 00:16:58.104 rdma_prtype: not specified 00:16:58.104 rdma_qptype: connected 00:16:58.104 rdma_cms: rdma-cm 00:16:58.104 rdma_pkey: 0x0000 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* 
]] 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:58.104 02:43:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:16:59.037 02:43:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:00.932 /dev/nvme0n1 ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- 
# get_nvme_devs 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:00.932 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:01.207 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:01.207 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:01.207 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:01.207 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:01.207 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:01.208 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:01.208 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:01.208 02:43:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:01.208 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:01.208 02:43:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.153 02:43:05 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:02.153 rmmod nvme_rdma 00:17:02.153 rmmod nvme_fabrics 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 788670 ']' 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 788670 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 788670 ']' 00:17:02.153 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 788670 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 788670 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 788670' 00:17:02.154 killing process with pid 788670 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 788670 00:17:02.154 [2024-05-15 02:43:05.339811] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:02.154 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 788670 00:17:02.412 [2024-05-15 02:43:05.449942] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:17:02.413 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.413 02:43:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:02.413 00:17:02.413 real 0m11.747s 00:17:02.413 user 0m21.759s 00:17:02.413 sys 0m5.457s 00:17:02.413 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:02.413 02:43:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:02.413 ************************************ 00:17:02.413 END TEST nvmf_nvme_cli 00:17:02.413 ************************************ 00:17:02.672 02:43:05 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:17:02.672 02:43:05 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:02.672 02:43:05 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:02.672 02:43:05 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:02.672 02:43:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:02.672 ************************************ 00:17:02.672 START TEST nvmf_host_management 00:17:02.672 ************************************ 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:02.672 * Looking for test storage... 
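With nvmf_nvme_cli finished, the flow it exercised can be read back out of the trace in a dozen commands. Summarized in order below (rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path used above, and <HOSTNQN>/<HOSTID> are the values nvme gen-hostnqn produced for this run):

    # target side
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # initiator side
    nvme discover --hostnqn=<HOSTNQN> --hostid=<HOSTID> -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=<HOSTNQN> --hostid=<HOSTID> -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme list        # the two malloc namespaces show up as /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1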
00:17:02.672 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.672 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.673 02:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.237 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:09.238 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:09.238 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:09.238 Found net devices under 0000:18:00.0: mlx_0_0 00:17:09.238 
02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:09.238 Found net devices under 0000:18:00.1: mlx_0_1 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:09.238 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:09.238 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:09.238 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:17:09.238 altname enp24s0f0np0 00:17:09.239 altname ens785f0np0 00:17:09.239 inet 192.168.100.8/24 scope global mlx_0_0 00:17:09.239 valid_lft forever preferred_lft forever 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:09.239 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:09.239 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:17:09.239 altname enp24s0f1np1 00:17:09.239 altname ens785f1np1 00:17:09.239 inet 192.168.100.9/24 scope global mlx_0_1 00:17:09.239 valid_lft forever preferred_lft forever 
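The get_ip_address helper traced above boils down to a single ip/awk/cut pipeline. A minimal standalone sketch of that step (function name and interface names taken from the trace; the loop and the empty-address fallback are additions for illustration):

get_ip_address() {
    # "ip -o -4 addr show <ifc>" prints one line per IPv4 address; field 4 is
    # the CIDR form (e.g. 192.168.100.8/24), so cut drops the prefix length.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for ifc in mlx_0_0 mlx_0_1; do
    ip_addr=$(get_ip_address "$ifc")
    echo "$ifc -> ${ip_addr:-<no IPv4 address>}"
done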
00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:09.239 192.168.100.9' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:09.239 192.168.100.9' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:09.239 192.168.100.9' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=792164 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 792164 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 792164 ']' 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:09.239 02:43:11 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.239 [2024-05-15 02:43:11.988862] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:17:09.239 [2024-05-15 02:43:11.988956] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.239 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.239 [2024-05-15 02:43:12.093174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.239 [2024-05-15 02:43:12.140914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.239 [2024-05-15 02:43:12.140961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.239 [2024-05-15 02:43:12.140976] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.239 [2024-05-15 02:43:12.140989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.239 [2024-05-15 02:43:12.141000] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.239 [2024-05-15 02:43:12.141105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.239 [2024-05-15 02:43:12.141207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.239 [2024-05-15 02:43:12.141231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.239 [2024-05-15 02:43:12.141236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.805 02:43:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 [2024-05-15 02:43:12.876150] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2383060/0x2387550) succeed. 00:17:09.805 [2024-05-15 02:43:12.891065] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23846a0/0x23c8be0) succeed. 
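For reference, the head/tail juggling traced at nvmf/common.sh@456-458 above reduces to the following; RDMA_IP_LIST, the variable names and both addresses are exactly what the trace echoed, only the standalone framing is added:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
# The first address on the list becomes the primary listener, the next one the
# secondary; tail -n +2 skips the first line before head picks the second.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"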
00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.805 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.805 Malloc0 00:17:10.064 [2024-05-15 02:43:13.106557] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:10.064 [2024-05-15 02:43:13.106968] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=792385 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 792385 /var/tmp/bdevperf.sock 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 792385 ']' 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
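waitforlisten itself is not traced here, so the following is only a hedged sketch of what that wait amounts to: poll until the bdevperf pid is still alive and its RPC UNIX socket has appeared. The helper name, retry count and sleep interval are illustrative, not the autotest_common.sh implementation.

wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/bdevperf.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited early
        [[ -S "$sock" ]] && return 0             # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out waiting
}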
00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:10.064 { 00:17:10.064 "params": { 00:17:10.064 "name": "Nvme$subsystem", 00:17:10.064 "trtype": "$TEST_TRANSPORT", 00:17:10.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.064 "adrfam": "ipv4", 00:17:10.064 "trsvcid": "$NVMF_PORT", 00:17:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.064 "hdgst": ${hdgst:-false}, 00:17:10.064 "ddgst": ${ddgst:-false} 00:17:10.064 }, 00:17:10.064 "method": "bdev_nvme_attach_controller" 00:17:10.064 } 00:17:10.064 EOF 00:17:10.064 )") 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:10.064 02:43:13 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:10.064 "params": { 00:17:10.064 "name": "Nvme0", 00:17:10.064 "trtype": "rdma", 00:17:10.064 "traddr": "192.168.100.8", 00:17:10.064 "adrfam": "ipv4", 00:17:10.064 "trsvcid": "4420", 00:17:10.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:10.064 "hdgst": false, 00:17:10.064 "ddgst": false 00:17:10.064 }, 00:17:10.064 "method": "bdev_nvme_attach_controller" 00:17:10.064 }' 00:17:10.064 [2024-05-15 02:43:13.212323] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:10.064 [2024-05-15 02:43:13.212388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792385 ] 00:17:10.064 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.064 [2024-05-15 02:43:13.306696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.322 [2024-05-15 02:43:13.354225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.322 Running I/O for 10 seconds... 
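The bdev_nvme_attach_controller entry printed just above is the whole point of gen_nvmf_target_json 0. Written out as a standalone config file it would look roughly like this; the params block is copied verbatim from the trace, while the outer subsystems/bdev wrapper is an assumption about the generated document rather than something visible in this log:

cat <<'EOF' | jq . > /tmp/bdevperf_nvme0.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# jq . both validates and pretty-prints the JSON, mirroring the jq call in the
# trace; bdevperf is then pointed at the result via --json, as shown above.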
00:17:10.322 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:10.322 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:17:10.322 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:10.322 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.322 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=115 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 115 -ge 100 ']' 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.580 02:43:13 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:11.515 [2024-05-15 02:43:14.684466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 
00:17:11.515 [2024-05-15 02:43:14.684776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:17:11.515 [2024-05-15 02:43:14.684806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:17:11.515 [2024-05-15 02:43:14.684837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:17:11.515 [2024-05-15 02:43:14.684867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:17:11.515 [2024-05-15 02:43:14.684903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:17:11.515 [2024-05-15 02:43:14.684933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:17:11.515 [2024-05-15 02:43:14.684964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.684980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:17:11.515 [2024-05-15 02:43:14.684996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.515 [2024-05-15 02:43:14.685013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x182400 00:17:11.515 [2024-05-15 02:43:14.685027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c903000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8e2000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8c1000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.685982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc9f000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.685995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc7e000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc3c000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1b000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.516 [2024-05-15 02:43:14.686162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 
key:0x182400 00:17:11.516 [2024-05-15 02:43:14.686177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb34000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb13000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.686437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x182400 00:17:11.517 [2024-05-15 02:43:14.686451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:11.517 [2024-05-15 02:43:14.688494] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:17:11.517 [2024-05-15 02:43:14.689836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:11.517 task offset: 30720 on job bdev=Nvme0n1 fails 00:17:11.517 00:17:11.517 Latency(us) 00:17:11.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.517 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:11.517 Job: Nvme0n1 ended in about 1.14 seconds with error 00:17:11.517 Verification LBA range: start 0x0 length 0x400 00:17:11.517 Nvme0n1 : 1.14 168.94 10.56 56.31 0.00 280412.61 2920.63 1013927.40 00:17:11.517 =================================================================================================================== 00:17:11.517 Total : 168.94 10.56 56.31 0.00 280412.61 2920.63 1013927.40 00:17:11.517 [2024-05-15 02:43:14.692354] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 792385 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.517 { 00:17:11.517 "params": { 00:17:11.517 "name": "Nvme$subsystem", 00:17:11.517 "trtype": "$TEST_TRANSPORT", 00:17:11.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.517 "adrfam": "ipv4", 00:17:11.517 "trsvcid": "$NVMF_PORT", 00:17:11.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.517 "hdgst": ${hdgst:-false}, 00:17:11.517 "ddgst": ${ddgst:-false} 00:17:11.517 }, 00:17:11.517 "method": "bdev_nvme_attach_controller" 00:17:11.517 } 00:17:11.517 EOF 00:17:11.517 )") 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
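The bdevperf target config above is generated inline by gen_nvmf_target_json and handed over on /dev/fd/62; the expanded JSON is printed in the records that follow. As a rough sketch only (not part of this run, and assuming the standard SPDK "subsystems"/"bdev"/"config" JSON layout and default build paths), the same attach could be reproduced with a standalone file:

    # Sketch: standalone equivalent of the fd-62 config used above; paths are assumptions,
    # the bdev_nvme_attach_controller params mirror the JSON printed in the next records.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1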
00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:11.517 02:43:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.517 "params": { 00:17:11.517 "name": "Nvme0", 00:17:11.517 "trtype": "rdma", 00:17:11.517 "traddr": "192.168.100.8", 00:17:11.517 "adrfam": "ipv4", 00:17:11.517 "trsvcid": "4420", 00:17:11.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:11.517 "hdgst": false, 00:17:11.517 "ddgst": false 00:17:11.517 }, 00:17:11.517 "method": "bdev_nvme_attach_controller" 00:17:11.517 }' 00:17:11.517 [2024-05-15 02:43:14.754296] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:11.517 [2024-05-15 02:43:14.754372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792582 ] 00:17:11.776 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.776 [2024-05-15 02:43:14.864439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.776 [2024-05-15 02:43:14.915042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.034 Running I/O for 1 seconds... 00:17:12.967 00:17:12.967 Latency(us) 00:17:12.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.967 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.967 Verification LBA range: start 0x0 length 0x400 00:17:12.967 Nvme0n1 : 1.03 2106.03 131.63 0.00 0.00 29645.28 1075.65 39663.53 00:17:12.967 =================================================================================================================== 00:17:12.967 Total : 2106.03 131.63 0.00 0.00 29645.28 1075.65 39663.53 00:17:13.224 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 792385 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:13.224 rmmod nvme_rdma 00:17:13.224 rmmod nvme_fabrics 00:17:13.224 
02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 792164 ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 792164 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 792164 ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 792164 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 792164 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 792164' 00:17:13.224 killing process with pid 792164 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 792164 00:17:13.224 [2024-05-15 02:43:16.466268] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:13.224 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 792164 00:17:13.481 [2024-05-15 02:43:16.567470] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:17:13.481 [2024-05-15 02:43:16.770392] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:13.740 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.740 02:43:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:13.740 02:43:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:13.740 00:17:13.740 real 0m11.011s 00:17:13.740 user 0m22.891s 00:17:13.740 sys 0m5.736s 00:17:13.740 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:13.740 02:43:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 ************************************ 00:17:13.740 END TEST nvmf_host_management 00:17:13.740 ************************************ 00:17:13.740 02:43:16 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:13.740 02:43:16 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:13.740 02:43:16 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:13.740 02:43:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:13.740 ************************************ 00:17:13.740 START TEST nvmf_lvol 00:17:13.740 ************************************ 00:17:13.740 02:43:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:13.740 * Looking for test storage... 00:17:13.740 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.740 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:13.999 02:43:17 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.565 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:20.566 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:20.566 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:20.566 Found net devices under 0000:18:00.0: mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.566 02:43:23 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:20.566 Found net devices under 0000:18:00.1: mlx_0_1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # 
continue 2 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:20.566 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:20.566 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:17:20.566 altname enp24s0f0np0 00:17:20.566 altname ens785f0np0 00:17:20.566 inet 192.168.100.8/24 scope global mlx_0_0 00:17:20.566 valid_lft forever preferred_lft forever 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:20.566 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:20.566 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:17:20.566 altname enp24s0f1np1 00:17:20.566 altname ens785f1np1 00:17:20.566 inet 192.168.100.9/24 scope global mlx_0_1 00:17:20.566 valid_lft forever preferred_lft forever 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.566 02:43:23 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.566 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:20.567 192.168.100.9' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:20.567 192.168.100.9' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:20.567 192.168.100.9' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=795674 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 795674 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 795674 ']' 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:20.567 [2024-05-15 02:43:23.439876] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:20.567 [2024-05-15 02:43:23.439953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.567 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.567 [2024-05-15 02:43:23.547317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.567 [2024-05-15 02:43:23.594635] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.567 [2024-05-15 02:43:23.594684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.567 [2024-05-15 02:43:23.594698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.567 [2024-05-15 02:43:23.594711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.567 [2024-05-15 02:43:23.594721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.567 [2024-05-15 02:43:23.594792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.567 [2024-05-15 02:43:23.594876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.567 [2024-05-15 02:43:23.594880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.567 02:43:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:20.825 [2024-05-15 02:43:24.004207] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eaf260/0x1eb3750) succeed. 00:17:20.825 [2024-05-15 02:43:24.018974] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eb0800/0x1ef4de0) succeed. 00:17:21.083 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.340 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:21.340 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.598 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:21.598 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:21.854 02:43:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:22.112 02:43:25 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b467273a-8c7e-4238-909b-1a2fb2df7796 00:17:22.112 02:43:25 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b467273a-8c7e-4238-909b-1a2fb2df7796 lvol 20 00:17:22.370 02:43:25 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=599920fa-d3eb-4862-ad44-7c388beaa603 00:17:22.370 02:43:25 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:22.627 02:43:25 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 599920fa-d3eb-4862-ad44-7c388beaa603 00:17:22.884 02:43:26 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:23.141 [2024-05-15 02:43:26.232485] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in 
v24.09 00:17:23.141 [2024-05-15 02:43:26.232909] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:23.141 02:43:26 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:23.398 02:43:26 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=796228 00:17:23.398 02:43:26 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:23.398 02:43:26 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:23.398 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.329 02:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 599920fa-d3eb-4862-ad44-7c388beaa603 MY_SNAPSHOT 00:17:24.586 02:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=29e8134c-974e-44cd-b150-712aa951c2a9 00:17:24.586 02:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 599920fa-d3eb-4862-ad44-7c388beaa603 30 00:17:24.843 02:43:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 29e8134c-974e-44cd-b150-712aa951c2a9 MY_CLONE 00:17:25.100 02:43:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e8a1724a-4c7b-442f-8411-7ab90833733c 00:17:25.100 02:43:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e8a1724a-4c7b-442f-8411-7ab90833733c 00:17:25.357 02:43:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 796228 00:17:35.351 Initializing NVMe Controllers 00:17:35.351 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:35.351 Controller IO queue size 128, less than required. 00:17:35.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:35.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:35.351 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:35.351 Initialization complete. Launching workers. 
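For reference, the target-side sequence that nvmf_lvol.sh drives above can be replayed by hand with rpc.py; a minimal sketch assembled from the commands visible in this log (not part of this run; the UUID capture is illustrative, real invocations return the IDs used below):

    # Sketch of the lvol flow exercised above, against a running nvmf_tgt.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                     # -> Malloc0
    $rpc bdev_malloc_create 64 512                     # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # returns the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # returns the lvol bdev UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    ./build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18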
00:17:35.351 ======================================================== 00:17:35.351 Latency(us) 00:17:35.351 Device Information : IOPS MiB/s Average min max 00:17:35.351 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11826.60 46.20 10827.37 3411.60 52012.19 00:17:35.351 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16240.80 63.44 7882.81 3512.84 39143.08 00:17:35.351 ======================================================== 00:17:35.351 Total : 28067.40 109.64 9123.54 3411.60 52012.19 00:17:35.351 00:17:35.351 02:43:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:35.351 02:43:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 599920fa-d3eb-4862-ad44-7c388beaa603 00:17:35.351 02:43:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b467273a-8c7e-4238-909b-1a2fb2df7796 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:35.610 rmmod nvme_rdma 00:17:35.610 rmmod nvme_fabrics 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 795674 ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 795674 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 795674 ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 795674 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 795674 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 795674' 00:17:35.610 killing process with pid 795674 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 795674 00:17:35.610 [2024-05-15 02:43:38.777031] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:35.610 02:43:38 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 795674 00:17:35.610 [2024-05-15 02:43:38.869115] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:17:35.869 02:43:39 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.869 02:43:39 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:35.869 00:17:35.869 real 0m22.226s 00:17:35.869 user 1m13.583s 00:17:35.869 sys 0m6.373s 00:17:35.869 02:43:39 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:35.869 02:43:39 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:35.869 ************************************ 00:17:35.869 END TEST nvmf_lvol 00:17:35.869 ************************************ 00:17:36.127 02:43:39 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:36.127 02:43:39 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:36.127 02:43:39 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:36.127 02:43:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:36.127 ************************************ 00:17:36.127 START TEST nvmf_lvs_grow 00:17:36.127 ************************************ 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:17:36.127 * Looking for test storage... 00:17:36.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow 
-- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.127 02:43:39 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.688 02:43:45 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.688 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:42.689 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:42.689 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.689 02:43:45 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:42.689 Found net devices under 0000:18:00.0: mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:42.689 Found net devices under 0000:18:00.1: mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:42.689 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.689 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:17:42.689 altname enp24s0f0np0 00:17:42.689 altname ens785f0np0 00:17:42.689 inet 192.168.100.8/24 scope global mlx_0_0 00:17:42.689 valid_lft forever preferred_lft forever 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:42.689 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.689 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:17:42.689 altname enp24s0f1np1 00:17:42.689 altname ens785f1np1 00:17:42.689 inet 192.168.100.9/24 scope global mlx_0_1 00:17:42.689 valid_lft forever preferred_lft forever 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:42.689 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:42.690 02:43:45 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:42.690 192.168.100.9' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:42.690 192.168.100.9' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:42.690 192.168.100.9' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=800598 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 800598 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 800598 ']' 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:42.690 02:43:45 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.690 [2024-05-15 02:43:45.755451] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
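A note on the address harvesting traced just above: for each RDMA-capable netdev found under the Mellanox PCI functions, the helper takes the first IPv4 address from the ip/awk/cut pipeline, which is how 192.168.100.8 and 192.168.100.9 become the target IPs, and NVMF_TRANSPORT_OPTS is set to '-t rdma --num-shared-buffers 1024' to match the nvmf_create_transport call a little further down. A minimal equivalent of that lookup (interface name taken from the trace; this is a sketch, not the test's own helper verbatim):

  # First IPv4 address on mlx_0_0, as harvested in the trace above -> 192.168.100.8
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1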
00:17:42.690 [2024-05-15 02:43:45.755527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.690 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.690 [2024-05-15 02:43:45.866701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.690 [2024-05-15 02:43:45.911838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.690 [2024-05-15 02:43:45.911890] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.690 [2024-05-15 02:43:45.911911] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.690 [2024-05-15 02:43:45.911924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.690 [2024-05-15 02:43:45.911935] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.690 [2024-05-15 02:43:45.911966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.948 02:43:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:43.206 [2024-05-15 02:43:46.238154] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fcfc00/0x1fd40f0) succeed. 00:17:43.206 [2024-05-15 02:43:46.251534] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd1100/0x2015780) succeed. 
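The lvs_grow test that begins below drives the freshly started target entirely through rpc.py. A condensed sketch of the sequence it exercises, reconstructed from the xtrace output that follows (the backing file path and the captured UUIDs are placeholders; the real run uses the workspace paths and UUIDs visible in the trace):

  # Hedged reconstruction of the traced flow, not the authoritative test script.
  rpc_py=$SPDK_ROOT/scripts/rpc.py          # $SPDK_ROOT stands in for the checkout path
  aio_file=/tmp/aio_bdev_file               # placeholder for .../test/nvmf/target/aio_bdev

  # 1. Back an lvol store with a 200 MiB file exposed as an AIO bdev (4 KiB blocks).
  truncate -s 200M "$aio_file"
  $rpc_py bdev_aio_create "$aio_file" aio_bdev 4096

  # 2. Create the lvstore with 4 MiB clusters; in the trace this yields 49 data clusters.
  lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 49

  # 3. Carve a 150 MiB lvol and export it over NVMe-oF/RDMA on 192.168.100.8:4420.
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # 4. While bdevperf writes to the exported namespace, grow the backing file and the lvstore;
  #    total_data_clusters roughly doubles, 49 -> 99 in the trace.
  truncate -s 400M "$aio_file"
  $rpc_py bdev_aio_rescan aio_bdev
  $rpc_py bdev_lvol_grow_lvstore -u "$lvs"
  $rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99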
00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:43.206 ************************************ 00:17:43.206 START TEST lvs_grow_clean 00:17:43.206 ************************************ 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:43.206 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.464 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:43.464 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:43.722 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:43.722 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:43.722 02:43:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:43.980 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:43.980 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:43.980 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 lvol 150 00:17:44.239 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=74f5ef9f-98be-4f17-bcb9-2d24467e3a26 00:17:44.239 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:44.239 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:44.498 [2024-05-15 02:43:47.618567] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:44.498 [2024-05-15 02:43:47.618643] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:44.498 true 00:17:44.498 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:44.498 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:44.756 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:44.756 02:43:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:45.013 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 74f5ef9f-98be-4f17-bcb9-2d24467e3a26 00:17:45.271 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:45.529 [2024-05-15 02:43:48.589425] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:45.529 [2024-05-15 02:43:48.589819] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:45.529 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=801165 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 801165 /var/tmp/bdevperf.sock 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 801165 ']' 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:45.787 02:43:48 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:45.787 02:43:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:45.787 [2024-05-15 02:43:48.896252] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:45.787 [2024-05-15 02:43:48.896324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801165 ] 00:17:45.787 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.787 [2024-05-15 02:43:48.997165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.787 [2024-05-15 02:43:49.048530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.044 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:46.044 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:17:46.044 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:46.302 Nvme0n1 00:17:46.302 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:46.559 [ 00:17:46.559 { 00:17:46.559 "name": "Nvme0n1", 00:17:46.559 "aliases": [ 00:17:46.559 "74f5ef9f-98be-4f17-bcb9-2d24467e3a26" 00:17:46.559 ], 00:17:46.559 "product_name": "NVMe disk", 00:17:46.559 "block_size": 4096, 00:17:46.559 "num_blocks": 38912, 00:17:46.559 "uuid": "74f5ef9f-98be-4f17-bcb9-2d24467e3a26", 00:17:46.559 "assigned_rate_limits": { 00:17:46.559 "rw_ios_per_sec": 0, 00:17:46.559 "rw_mbytes_per_sec": 0, 00:17:46.559 "r_mbytes_per_sec": 0, 00:17:46.559 "w_mbytes_per_sec": 0 00:17:46.559 }, 00:17:46.559 "claimed": false, 00:17:46.559 "zoned": false, 00:17:46.559 "supported_io_types": { 00:17:46.559 "read": true, 00:17:46.559 "write": true, 00:17:46.559 "unmap": true, 00:17:46.559 "write_zeroes": true, 00:17:46.559 "flush": true, 00:17:46.559 "reset": true, 00:17:46.559 "compare": true, 00:17:46.559 "compare_and_write": true, 00:17:46.559 "abort": true, 00:17:46.559 "nvme_admin": true, 00:17:46.559 "nvme_io": true 00:17:46.559 }, 00:17:46.559 "memory_domains": [ 00:17:46.559 { 00:17:46.559 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:46.559 "dma_device_type": 0 00:17:46.559 } 00:17:46.559 ], 00:17:46.559 "driver_specific": { 00:17:46.559 "nvme": [ 00:17:46.559 { 00:17:46.559 "trid": { 00:17:46.559 "trtype": "RDMA", 00:17:46.559 "adrfam": "IPv4", 00:17:46.559 "traddr": "192.168.100.8", 00:17:46.559 "trsvcid": "4420", 00:17:46.559 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:46.559 }, 00:17:46.559 "ctrlr_data": { 00:17:46.559 "cntlid": 1, 00:17:46.559 "vendor_id": "0x8086", 00:17:46.559 "model_number": "SPDK bdev Controller", 00:17:46.559 "serial_number": "SPDK0", 00:17:46.559 
"firmware_revision": "24.05", 00:17:46.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:46.559 "oacs": { 00:17:46.559 "security": 0, 00:17:46.559 "format": 0, 00:17:46.559 "firmware": 0, 00:17:46.559 "ns_manage": 0 00:17:46.559 }, 00:17:46.559 "multi_ctrlr": true, 00:17:46.559 "ana_reporting": false 00:17:46.559 }, 00:17:46.559 "vs": { 00:17:46.559 "nvme_version": "1.3" 00:17:46.559 }, 00:17:46.559 "ns_data": { 00:17:46.559 "id": 1, 00:17:46.559 "can_share": true 00:17:46.559 } 00:17:46.559 } 00:17:46.559 ], 00:17:46.559 "mp_policy": "active_passive" 00:17:46.559 } 00:17:46.559 } 00:17:46.559 ] 00:17:46.559 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=801179 00:17:46.559 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:46.559 02:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:46.817 Running I/O for 10 seconds... 00:17:47.749 Latency(us) 00:17:47.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.749 Nvme0n1 : 1.00 22563.00 88.14 0.00 0.00 0.00 0.00 0.00 00:17:47.749 =================================================================================================================== 00:17:47.749 Total : 22563.00 88.14 0.00 0.00 0.00 0.00 0.00 00:17:47.749 00:17:48.681 02:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:48.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.681 Nvme0n1 : 2.00 22814.50 89.12 0.00 0.00 0.00 0.00 0.00 00:17:48.681 =================================================================================================================== 00:17:48.681 Total : 22814.50 89.12 0.00 0.00 0.00 0.00 0.00 00:17:48.681 00:17:48.938 true 00:17:48.939 02:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:48.939 02:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:49.196 02:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:49.196 02:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:49.196 02:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 801179 00:17:49.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.760 Nvme0n1 : 3.00 22932.00 89.58 0.00 0.00 0.00 0.00 0.00 00:17:49.760 =================================================================================================================== 00:17:49.760 Total : 22932.00 89.58 0.00 0.00 0.00 0.00 0.00 00:17:49.760 00:17:50.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.693 Nvme0n1 : 4.00 23015.00 89.90 0.00 0.00 0.00 0.00 0.00 00:17:50.693 =================================================================================================================== 00:17:50.693 Total : 23015.00 89.90 0.00 0.00 0.00 0.00 0.00 
00:17:50.693 00:17:51.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.625 Nvme0n1 : 5.00 23072.40 90.13 0.00 0.00 0.00 0.00 0.00 00:17:51.625 =================================================================================================================== 00:17:51.625 Total : 23072.40 90.13 0.00 0.00 0.00 0.00 0.00 00:17:51.625 00:17:52.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.998 Nvme0n1 : 6.00 23114.83 90.29 0.00 0.00 0.00 0.00 0.00 00:17:52.998 =================================================================================================================== 00:17:52.998 Total : 23114.83 90.29 0.00 0.00 0.00 0.00 0.00 00:17:52.998 00:17:53.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.931 Nvme0n1 : 7.00 23149.86 90.43 0.00 0.00 0.00 0.00 0.00 00:17:53.931 =================================================================================================================== 00:17:53.931 Total : 23149.86 90.43 0.00 0.00 0.00 0.00 0.00 00:17:53.931 00:17:54.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.862 Nvme0n1 : 8.00 23176.88 90.53 0.00 0.00 0.00 0.00 0.00 00:17:54.862 =================================================================================================================== 00:17:54.862 Total : 23176.88 90.53 0.00 0.00 0.00 0.00 0.00 00:17:54.862 00:17:55.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.794 Nvme0n1 : 9.00 23195.78 90.61 0.00 0.00 0.00 0.00 0.00 00:17:55.794 =================================================================================================================== 00:17:55.794 Total : 23195.78 90.61 0.00 0.00 0.00 0.00 0.00 00:17:55.794 00:17:56.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.727 Nvme0n1 : 10.00 23213.30 90.68 0.00 0.00 0.00 0.00 0.00 00:17:56.727 =================================================================================================================== 00:17:56.727 Total : 23213.30 90.68 0.00 0.00 0.00 0.00 0.00 00:17:56.727 00:17:56.727 00:17:56.727 Latency(us) 00:17:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.727 Nvme0n1 : 10.00 23214.08 90.68 0.00 0.00 5508.61 3732.70 19831.76 00:17:56.727 =================================================================================================================== 00:17:56.727 Total : 23214.08 90.68 0.00 0.00 5508.61 3732.70 19831.76 00:17:56.727 0 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 801165 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 801165 ']' 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 801165 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 801165 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:56.727 02:43:59 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 801165' 00:17:56.727 killing process with pid 801165 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 801165 00:17:56.727 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.727 00:17:56.727 Latency(us) 00:17:56.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.727 =================================================================================================================== 00:17:56.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.727 02:43:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 801165 00:17:56.984 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:57.248 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:57.507 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:57.507 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:57.765 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:57.765 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:57.765 02:44:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:58.022 [2024-05-15 02:44:01.192212] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.022 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:58.279 request: 00:17:58.279 { 00:17:58.279 "uuid": "8e7bd860-0d1c-4176-b82f-f73093b9f845", 00:17:58.279 "method": "bdev_lvol_get_lvstores", 00:17:58.279 "req_id": 1 00:17:58.279 } 00:17:58.279 Got JSON-RPC error response 00:17:58.279 response: 00:17:58.279 { 00:17:58.279 "code": -19, 00:17:58.279 "message": "No such device" 00:17:58.279 } 00:17:58.279 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:17:58.279 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:58.279 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:58.279 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:58.279 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:58.592 aio_bdev 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 74f5ef9f-98be-4f17-bcb9-2d24467e3a26 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=74f5ef9f-98be-4f17-bcb9-2d24467e3a26 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:58.592 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.883 02:44:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 74f5ef9f-98be-4f17-bcb9-2d24467e3a26 -t 2000 00:17:59.146 [ 00:17:59.146 { 00:17:59.146 "name": "74f5ef9f-98be-4f17-bcb9-2d24467e3a26", 00:17:59.146 "aliases": [ 00:17:59.146 "lvs/lvol" 00:17:59.146 ], 00:17:59.146 "product_name": "Logical Volume", 00:17:59.146 "block_size": 4096, 00:17:59.146 "num_blocks": 38912, 00:17:59.146 "uuid": "74f5ef9f-98be-4f17-bcb9-2d24467e3a26", 00:17:59.146 "assigned_rate_limits": { 00:17:59.146 "rw_ios_per_sec": 0, 00:17:59.146 "rw_mbytes_per_sec": 0, 00:17:59.146 "r_mbytes_per_sec": 0, 00:17:59.146 "w_mbytes_per_sec": 0 00:17:59.146 }, 00:17:59.146 "claimed": false, 00:17:59.146 "zoned": 
false, 00:17:59.146 "supported_io_types": { 00:17:59.146 "read": true, 00:17:59.146 "write": true, 00:17:59.146 "unmap": true, 00:17:59.146 "write_zeroes": true, 00:17:59.146 "flush": false, 00:17:59.146 "reset": true, 00:17:59.146 "compare": false, 00:17:59.146 "compare_and_write": false, 00:17:59.146 "abort": false, 00:17:59.146 "nvme_admin": false, 00:17:59.146 "nvme_io": false 00:17:59.146 }, 00:17:59.146 "driver_specific": { 00:17:59.146 "lvol": { 00:17:59.146 "lvol_store_uuid": "8e7bd860-0d1c-4176-b82f-f73093b9f845", 00:17:59.146 "base_bdev": "aio_bdev", 00:17:59.146 "thin_provision": false, 00:17:59.146 "num_allocated_clusters": 38, 00:17:59.146 "snapshot": false, 00:17:59.146 "clone": false, 00:17:59.146 "esnap_clone": false 00:17:59.146 } 00:17:59.146 } 00:17:59.146 } 00:17:59.146 ] 00:17:59.146 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:17:59.146 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:59.146 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:59.403 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:59.403 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:17:59.403 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:59.661 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:59.661 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 74f5ef9f-98be-4f17-bcb9-2d24467e3a26 00:17:59.919 02:44:02 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e7bd860-0d1c-4176-b82f-f73093b9f845 00:18:00.177 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.434 00:18:00.434 real 0m17.143s 00:18:00.434 user 0m16.877s 00:18:00.434 sys 0m1.603s 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:00.434 ************************************ 00:18:00.434 END TEST lvs_grow_clean 00:18:00.434 ************************************ 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:00.434 ************************************ 
00:18:00.434 START TEST lvs_grow_dirty 00:18:00.434 ************************************ 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.434 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:00.690 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:00.690 02:44:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:00.948 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:00.948 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:00.948 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:01.205 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:01.205 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:01.205 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc lvol 150 00:18:01.463 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:01.463 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:01.463 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:01.720 [2024-05-15 02:44:04.796428] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:01.720 [2024-05-15 02:44:04.796502] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:01.720 true 00:18:01.720 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:01.720 02:44:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:01.978 02:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:01.978 02:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:02.236 02:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:02.495 02:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:02.495 [2024-05-15 02:44:05.747523] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:02.495 02:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=803408 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 803408 /var/tmp/bdevperf.sock 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 803408 ']' 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:02.753 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:03.011 [2024-05-15 02:44:06.059283] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
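The dirty variant below wires up the initiator side the same way as the clean run: bdevperf is started idle with -z on its own RPC socket, the namespace just exported on 192.168.100.8:4420 is attached over RDMA as Nvme0/Nvme0n1, and perform_tests launches the 10-second 4 KiB randwrite workload during which the lvstore is grown. A hedged sketch of that wiring using the parameters visible in the trace ($SPDK_ROOT again stands in for the workspace checkout):

  sock=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) on core mask 0x2: 4 KiB random writes, queue depth 128, 10 s run.
  $SPDK_ROOT/build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # Attach the target's namespace over NVMe/RDMA; it shows up as bdev Nvme0n1.
  $SPDK_ROOT/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # Confirm the bdev is up, then kick off the workload.
  $SPDK_ROOT/scripts/rpc.py -s $sock bdev_get_bdevs -b Nvme0n1 -t 3000
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests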
00:18:03.011 [2024-05-15 02:44:06.059364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803408 ] 00:18:03.011 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.011 [2024-05-15 02:44:06.159556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.011 [2024-05-15 02:44:06.210919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.269 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:03.269 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:18:03.269 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:03.527 Nvme0n1 00:18:03.527 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:03.784 [ 00:18:03.784 { 00:18:03.784 "name": "Nvme0n1", 00:18:03.784 "aliases": [ 00:18:03.784 "b095ec39-0c63-4f93-a0f4-8045340f22c8" 00:18:03.784 ], 00:18:03.784 "product_name": "NVMe disk", 00:18:03.784 "block_size": 4096, 00:18:03.784 "num_blocks": 38912, 00:18:03.784 "uuid": "b095ec39-0c63-4f93-a0f4-8045340f22c8", 00:18:03.784 "assigned_rate_limits": { 00:18:03.784 "rw_ios_per_sec": 0, 00:18:03.784 "rw_mbytes_per_sec": 0, 00:18:03.784 "r_mbytes_per_sec": 0, 00:18:03.784 "w_mbytes_per_sec": 0 00:18:03.784 }, 00:18:03.784 "claimed": false, 00:18:03.784 "zoned": false, 00:18:03.784 "supported_io_types": { 00:18:03.784 "read": true, 00:18:03.784 "write": true, 00:18:03.784 "unmap": true, 00:18:03.784 "write_zeroes": true, 00:18:03.784 "flush": true, 00:18:03.785 "reset": true, 00:18:03.785 "compare": true, 00:18:03.785 "compare_and_write": true, 00:18:03.785 "abort": true, 00:18:03.785 "nvme_admin": true, 00:18:03.785 "nvme_io": true 00:18:03.785 }, 00:18:03.785 "memory_domains": [ 00:18:03.785 { 00:18:03.785 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:03.785 "dma_device_type": 0 00:18:03.785 } 00:18:03.785 ], 00:18:03.785 "driver_specific": { 00:18:03.785 "nvme": [ 00:18:03.785 { 00:18:03.785 "trid": { 00:18:03.785 "trtype": "RDMA", 00:18:03.785 "adrfam": "IPv4", 00:18:03.785 "traddr": "192.168.100.8", 00:18:03.785 "trsvcid": "4420", 00:18:03.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:03.785 }, 00:18:03.785 "ctrlr_data": { 00:18:03.785 "cntlid": 1, 00:18:03.785 "vendor_id": "0x8086", 00:18:03.785 "model_number": "SPDK bdev Controller", 00:18:03.785 "serial_number": "SPDK0", 00:18:03.785 "firmware_revision": "24.05", 00:18:03.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:03.785 "oacs": { 00:18:03.785 "security": 0, 00:18:03.785 "format": 0, 00:18:03.785 "firmware": 0, 00:18:03.785 "ns_manage": 0 00:18:03.785 }, 00:18:03.785 "multi_ctrlr": true, 00:18:03.785 "ana_reporting": false 00:18:03.785 }, 00:18:03.785 "vs": { 00:18:03.785 "nvme_version": "1.3" 00:18:03.785 }, 00:18:03.785 "ns_data": { 00:18:03.785 "id": 1, 00:18:03.785 "can_share": true 00:18:03.785 } 00:18:03.785 } 00:18:03.785 ], 00:18:03.785 "mp_policy": "active_passive" 00:18:03.785 } 00:18:03.785 } 00:18:03.785 ] 00:18:03.785 02:44:06 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=803533 00:18:03.785 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:03.785 02:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:03.785 Running I/O for 10 seconds... 00:18:04.718 Latency(us) 00:18:04.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.718 Nvme0n1 : 1.00 22560.00 88.12 0.00 0.00 0.00 0.00 0.00 00:18:04.718 =================================================================================================================== 00:18:04.718 Total : 22560.00 88.12 0.00 0.00 0.00 0.00 0.00 00:18:04.718 00:18:05.651 02:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:05.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.909 Nvme0n1 : 2.00 22784.00 89.00 0.00 0.00 0.00 0.00 0.00 00:18:05.909 =================================================================================================================== 00:18:05.909 Total : 22784.00 89.00 0.00 0.00 0.00 0.00 0.00 00:18:05.909 00:18:05.909 true 00:18:05.909 02:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:05.909 02:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:06.167 02:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:06.167 02:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:06.167 02:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 803533 00:18:06.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.732 Nvme0n1 : 3.00 22901.33 89.46 0.00 0.00 0.00 0.00 0.00 00:18:06.732 =================================================================================================================== 00:18:06.732 Total : 22901.33 89.46 0.00 0.00 0.00 0.00 0.00 00:18:06.732 00:18:08.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.105 Nvme0n1 : 4.00 22992.00 89.81 0.00 0.00 0.00 0.00 0.00 00:18:08.105 =================================================================================================================== 00:18:08.105 Total : 22992.00 89.81 0.00 0.00 0.00 0.00 0.00 00:18:08.105 00:18:09.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.038 Nvme0n1 : 5.00 23052.60 90.05 0.00 0.00 0.00 0.00 0.00 00:18:09.038 =================================================================================================================== 00:18:09.038 Total : 23052.60 90.05 0.00 0.00 0.00 0.00 0.00 00:18:09.038 00:18:09.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.971 Nvme0n1 : 6.00 23099.50 90.23 0.00 0.00 0.00 0.00 0.00 00:18:09.971 
=================================================================================================================== 00:18:09.971 Total : 23099.50 90.23 0.00 0.00 0.00 0.00 0.00 00:18:09.971 00:18:10.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.904 Nvme0n1 : 7.00 23132.14 90.36 0.00 0.00 0.00 0.00 0.00 00:18:10.904 =================================================================================================================== 00:18:10.904 Total : 23132.14 90.36 0.00 0.00 0.00 0.00 0.00 00:18:10.904 00:18:11.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.837 Nvme0n1 : 8.00 23163.88 90.48 0.00 0.00 0.00 0.00 0.00 00:18:11.837 =================================================================================================================== 00:18:11.837 Total : 23163.88 90.48 0.00 0.00 0.00 0.00 0.00 00:18:11.837 00:18:12.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.769 Nvme0n1 : 9.00 23185.67 90.57 0.00 0.00 0.00 0.00 0.00 00:18:12.769 =================================================================================================================== 00:18:12.769 Total : 23185.67 90.57 0.00 0.00 0.00 0.00 0.00 00:18:12.769 00:18:14.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.144 Nvme0n1 : 10.00 23203.60 90.64 0.00 0.00 0.00 0.00 0.00 00:18:14.144 =================================================================================================================== 00:18:14.144 Total : 23203.60 90.64 0.00 0.00 0.00 0.00 0.00 00:18:14.144 00:18:14.144 00:18:14.144 Latency(us) 00:18:14.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.144 Nvme0n1 : 10.00 23204.86 90.64 0.00 0.00 5510.78 3875.17 19945.74 00:18:14.144 =================================================================================================================== 00:18:14.145 Total : 23204.86 90.64 0.00 0.00 5510.78 3875.17 19945.74 00:18:14.145 0 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 803408 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 803408 ']' 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 803408 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 803408 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 803408' 00:18:14.145 killing process with pid 803408 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 803408 00:18:14.145 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.145 00:18:14.145 Latency(us) 00:18:14.145 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:14.145 =================================================================================================================== 00:18:14.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 803408 00:18:14.145 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:14.404 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:14.663 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:14.663 02:44:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 800598 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 800598 00:18:14.922 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 800598 Killed "${NVMF_APP[@]}" "$@" 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=805031 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 805031 00:18:14.922 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 805031 ']' 00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
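The trace above is the start of the dirty-recovery branch of lvs_grow: after the grown lvstore reports 61 free clusters, the previous nvmf_tgt (pid 800598) is killed with SIGKILL and a fresh instance is started on core mask 0x1, so the logical volume store will have to be recovered from an unclean shutdown once its backing bdev reappears. A minimal sketch of that step, assuming the same rpc.py client and nvmf_tgt binary used throughout this log (the shell variables below are illustrative placeholders, not values from this run):

    # $lvs_uuid and $old_nvmfpid stand in for values captured earlier in the test
    # Confirm the grown lvstore before simulating a crash
    free_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')

    # Simulate an unclean shutdown of the target, then bring up a new one
    kill -9 "$old_nvmfpid"
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # waitforlisten (from autotest_common.sh) blocks until /var/tmp/spdk.sock accepts RPCs
    waitforlisten "$nvmfpid"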
00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:14.923 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 [2024-05-15 02:44:18.169431] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:14.923 [2024-05-15 02:44:18.169507] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.181 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.181 [2024-05-15 02:44:18.279764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.181 [2024-05-15 02:44:18.329821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.181 [2024-05-15 02:44:18.329868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.181 [2024-05-15 02:44:18.329883] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.181 [2024-05-15 02:44:18.329902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.181 [2024-05-15 02:44:18.329913] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.181 [2024-05-15 02:44:18.329952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.181 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:15.181 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:18:15.181 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.181 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:15.181 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:15.439 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.439 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:15.439 [2024-05-15 02:44:18.713151] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:15.439 [2024-05-15 02:44:18.713249] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:15.439 [2024-05-15 02:44:18.713291] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:15.697 02:44:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b095ec39-0c63-4f93-a0f4-8045340f22c8 -t 2000 00:18:15.955 [ 00:18:15.955 { 00:18:15.955 "name": "b095ec39-0c63-4f93-a0f4-8045340f22c8", 00:18:15.955 "aliases": [ 00:18:15.955 "lvs/lvol" 00:18:15.955 ], 00:18:15.955 "product_name": "Logical Volume", 00:18:15.955 "block_size": 4096, 00:18:15.955 "num_blocks": 38912, 00:18:15.955 "uuid": "b095ec39-0c63-4f93-a0f4-8045340f22c8", 00:18:15.955 "assigned_rate_limits": { 00:18:15.955 "rw_ios_per_sec": 0, 00:18:15.955 "rw_mbytes_per_sec": 0, 00:18:15.955 "r_mbytes_per_sec": 0, 00:18:15.955 "w_mbytes_per_sec": 0 00:18:15.955 }, 00:18:15.955 "claimed": false, 00:18:15.955 "zoned": false, 00:18:15.955 "supported_io_types": { 00:18:15.955 "read": true, 00:18:15.955 "write": true, 00:18:15.955 "unmap": true, 00:18:15.955 "write_zeroes": true, 00:18:15.955 "flush": false, 00:18:15.955 "reset": true, 00:18:15.955 "compare": false, 00:18:15.955 "compare_and_write": false, 00:18:15.955 "abort": false, 00:18:15.955 "nvme_admin": false, 00:18:15.955 "nvme_io": false 00:18:15.955 }, 00:18:15.955 "driver_specific": { 00:18:15.955 "lvol": { 00:18:15.955 "lvol_store_uuid": "91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc", 00:18:15.955 "base_bdev": "aio_bdev", 00:18:15.955 "thin_provision": false, 00:18:15.955 "num_allocated_clusters": 38, 00:18:15.955 "snapshot": false, 00:18:15.955 "clone": false, 00:18:15.955 "esnap_clone": false 00:18:15.955 } 00:18:15.955 } 00:18:15.955 } 00:18:15.955 ] 00:18:15.955 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:18:15.955 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:15.955 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:16.213 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:16.213 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:16.213 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:16.470 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:16.470 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:16.728 [2024-05-15 02:44:19.925867] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 
00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:16.728 02:44:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:16.986 request: 00:18:16.986 { 00:18:16.986 "uuid": "91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc", 00:18:16.986 "method": "bdev_lvol_get_lvstores", 00:18:16.986 "req_id": 1 00:18:16.986 } 00:18:16.986 Got JSON-RPC error response 00:18:16.986 response: 00:18:16.986 { 00:18:16.986 "code": -19, 00:18:16.986 "message": "No such device" 00:18:16.986 } 00:18:16.986 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:18:16.986 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:16.986 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:16.986 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:16.986 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.245 aio_bdev 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:18:17.245 02:44:20 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:18:17.245 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:17.503 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b095ec39-0c63-4f93-a0f4-8045340f22c8 -t 2000 00:18:17.761 [ 00:18:17.761 { 00:18:17.761 "name": "b095ec39-0c63-4f93-a0f4-8045340f22c8", 00:18:17.761 "aliases": [ 00:18:17.761 "lvs/lvol" 00:18:17.761 ], 00:18:17.761 "product_name": "Logical Volume", 00:18:17.761 "block_size": 4096, 00:18:17.761 "num_blocks": 38912, 00:18:17.761 "uuid": "b095ec39-0c63-4f93-a0f4-8045340f22c8", 00:18:17.761 "assigned_rate_limits": { 00:18:17.761 "rw_ios_per_sec": 0, 00:18:17.761 "rw_mbytes_per_sec": 0, 00:18:17.761 "r_mbytes_per_sec": 0, 00:18:17.761 "w_mbytes_per_sec": 0 00:18:17.761 }, 00:18:17.761 "claimed": false, 00:18:17.761 "zoned": false, 00:18:17.761 "supported_io_types": { 00:18:17.761 "read": true, 00:18:17.761 "write": true, 00:18:17.761 "unmap": true, 00:18:17.761 "write_zeroes": true, 00:18:17.761 "flush": false, 00:18:17.762 "reset": true, 00:18:17.762 "compare": false, 00:18:17.762 "compare_and_write": false, 00:18:17.762 "abort": false, 00:18:17.762 "nvme_admin": false, 00:18:17.762 "nvme_io": false 00:18:17.762 }, 00:18:17.762 "driver_specific": { 00:18:17.762 "lvol": { 00:18:17.762 "lvol_store_uuid": "91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc", 00:18:17.762 "base_bdev": "aio_bdev", 00:18:17.762 "thin_provision": false, 00:18:17.762 "num_allocated_clusters": 38, 00:18:17.762 "snapshot": false, 00:18:17.762 "clone": false, 00:18:17.762 "esnap_clone": false 00:18:17.762 } 00:18:17.762 } 00:18:17.762 } 00:18:17.762 ] 00:18:17.762 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:18:17.762 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:17.762 02:44:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:18.018 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:18.018 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:18.018 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:18.275 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:18.275 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b095ec39-0c63-4f93-a0f4-8045340f22c8 00:18:18.579 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91a9a76d-bfe1-4beb-a75f-3e23d2bcb8bc 00:18:18.878 02:44:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
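At this point the test has verified that bdev_lvol_get_lvstores fails with -19 (No such device) while the backing aio bdev is absent; the NOT helper from autotest_common.sh is expected to succeed only when the wrapped command fails, which is why es=1 is treated as a pass. The aio bdev is then re-created (triggering blobstore recovery again), the cluster counts are re-checked, and everything is torn down in dependency order: lvol first, then lvstore, then the aio bdev. A sketch of that tail end, assuming the UUIDs and file path are held in shell variables (names illustrative, not taken from the script):

    # $lvs_uuid, $lvol_uuid and $aio_file are placeholders for values used earlier in the test
    # Expect failure while the base bdev is gone
    NOT scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid"

    # Restore the base bdev, then tear down from the top of the stack
    scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    scripts/rpc.py bdev_lvol_delete "$lvol_uuid"
    scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs_uuid"
    scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f "$aio_file"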
00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.144 00:18:19.144 real 0m18.584s 00:18:19.144 user 0m48.344s 00:18:19.144 sys 0m3.542s 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:19.144 ************************************ 00:18:19.144 END TEST lvs_grow_dirty 00:18:19.144 ************************************ 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:19.144 nvmf_trace.0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:19.144 rmmod nvme_rdma 00:18:19.144 rmmod nvme_fabrics 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 805031 ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 805031 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 805031 ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 805031 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 805031 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 805031' 00:18:19.144 killing process with pid 805031 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 805031 00:18:19.144 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 805031 00:18:19.403 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.403 02:44:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:19.403 00:18:19.403 real 0m43.389s 00:18:19.403 user 1m11.667s 00:18:19.403 sys 0m10.646s 00:18:19.403 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:19.403 02:44:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.403 ************************************ 00:18:19.403 END TEST nvmf_lvs_grow 00:18:19.403 ************************************ 00:18:19.403 02:44:22 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:19.403 02:44:22 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:19.403 02:44:22 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:19.403 02:44:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:19.661 ************************************ 00:18:19.661 START TEST nvmf_bdev_io_wait 00:18:19.661 ************************************ 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:19.661 * Looking for test storage... 
00:18:19.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.661 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.661 02:44:22 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.662 02:44:22 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:26.221 
02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:26.221 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:26.221 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:26.221 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:26.222 Found net devices under 0000:18:00.0: mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:26.222 Found net devices under 0000:18:00.1: mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:26.222 02:44:29 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:26.222 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:26.222 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:18:26.222 altname enp24s0f0np0 00:18:26.222 altname ens785f0np0 00:18:26.222 inet 192.168.100.8/24 scope global mlx_0_0 00:18:26.222 valid_lft forever preferred_lft forever 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:26.222 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:26.222 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:18:26.222 altname enp24s0f1np1 00:18:26.222 altname ens785f1np1 00:18:26.222 inet 192.168.100.9/24 scope global mlx_0_1 00:18:26.222 valid_lft forever preferred_lft forever 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:26.222 192.168.100.9' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:26.222 192.168.100.9' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:26.222 192.168.100.9' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:26.222 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=808526 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 808526 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 808526 ']' 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:26.223 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.223 [2024-05-15 02:44:29.340302] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:26.223 [2024-05-15 02:44:29.340371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.223 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.223 [2024-05-15 02:44:29.450727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.223 [2024-05-15 02:44:29.499667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.223 [2024-05-15 02:44:29.499717] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:26.223 [2024-05-15 02:44:29.499732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.223 [2024-05-15 02:44:29.499745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.223 [2024-05-15 02:44:29.499756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.223 [2024-05-15 02:44:29.499855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.223 [2024-05-15 02:44:29.499951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.223 [2024-05-15 02:44:29.499995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.223 [2024-05-15 02:44:29.499995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.478 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.478 [2024-05-15 02:44:29.702708] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1128cb0/0x112d1a0) succeed. 00:18:26.478 [2024-05-15 02:44:29.717290] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x112a2f0/0x116e830) succeed. 
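Annotation: the target bring-up traced above (nvmf_tgt launched with --wait-for-rpc, then bdev_set_options, framework_start_init and nvmf_create_transport, ending in the two create_ib_device notices) can be reproduced by hand with SPDK's rpc.py. This is a hedged sketch, not the test script itself: the workspace path and core mask are copied from the trace, rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the sleep is a crude stand-in for waitforlisten.

# Hedged sketch: manual equivalent of the nvmfappstart/rpc_cmd sequence above.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumption: same layout as the CI workspace
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2                                                  # stand-in for waitforlisten on /var/tmp/spdk.sock
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1                          # small bdev_io pool/cache, as in bdev_io_wait.sh@18
$RPC framework_start_init                                # release the --wait-for-rpc hold, as in @19
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192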
00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.735 Malloc0 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:26.735 [2024-05-15 02:44:29.929869] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:26.735 [2024-05-15 02:44:29.930248] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=808575 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=808577 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.735 { 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme$subsystem", 00:18:26.735 "trtype": "$TEST_TRANSPORT", 00:18:26.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.735 "adrfam": "ipv4", 00:18:26.735 "trsvcid": "$NVMF_PORT", 00:18:26.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.735 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:18:26.735 "hdgst": ${hdgst:-false}, 00:18:26.735 "ddgst": ${ddgst:-false} 00:18:26.735 }, 00:18:26.735 "method": "bdev_nvme_attach_controller" 00:18:26.735 } 00:18:26.735 EOF 00:18:26.735 )") 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=808579 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.735 { 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme$subsystem", 00:18:26.735 "trtype": "$TEST_TRANSPORT", 00:18:26.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.735 "adrfam": "ipv4", 00:18:26.735 "trsvcid": "$NVMF_PORT", 00:18:26.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.735 "hdgst": ${hdgst:-false}, 00:18:26.735 "ddgst": ${ddgst:-false} 00:18:26.735 }, 00:18:26.735 "method": "bdev_nvme_attach_controller" 00:18:26.735 } 00:18:26.735 EOF 00:18:26.735 )") 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=808582 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.735 { 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme$subsystem", 00:18:26.735 "trtype": "$TEST_TRANSPORT", 00:18:26.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.735 "adrfam": "ipv4", 00:18:26.735 "trsvcid": "$NVMF_PORT", 00:18:26.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.735 "hdgst": ${hdgst:-false}, 00:18:26.735 "ddgst": ${ddgst:-false} 00:18:26.735 }, 00:18:26.735 "method": "bdev_nvme_attach_controller" 00:18:26.735 } 00:18:26.735 EOF 00:18:26.735 )") 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # 
config=() 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:26.735 { 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme$subsystem", 00:18:26.735 "trtype": "$TEST_TRANSPORT", 00:18:26.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:26.735 "adrfam": "ipv4", 00:18:26.735 "trsvcid": "$NVMF_PORT", 00:18:26.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:26.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:26.735 "hdgst": ${hdgst:-false}, 00:18:26.735 "ddgst": ${ddgst:-false} 00:18:26.735 }, 00:18:26.735 "method": "bdev_nvme_attach_controller" 00:18:26.735 } 00:18:26.735 EOF 00:18:26.735 )") 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 808575 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme1", 00:18:26.735 "trtype": "rdma", 00:18:26.735 "traddr": "192.168.100.8", 00:18:26.735 "adrfam": "ipv4", 00:18:26.735 "trsvcid": "4420", 00:18:26.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.735 "hdgst": false, 00:18:26.735 "ddgst": false 00:18:26.735 }, 00:18:26.735 "method": "bdev_nvme_attach_controller" 00:18:26.735 }' 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
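Annotation: the four config=() blocks above show gen_nvmf_target_json expanding the same heredoc once per bdevperf instance (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) and validating it with jq before handing it to bdevperf on /dev/fd/63. The sketch below mirrors that pattern; the outer "subsystems" wrapper is an assumption about what the helper finally assembles (standard SPDK JSON-config layout) and is not copied from the trace, which only prints the params fragment.

# Hedged sketch of the per-instance config generation traced above.
gen_attach_controller_json() {
    local ip=$1 subnqn=$2
    cat <<EOF | jq .
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "$ip",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "$subnqn",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# Each bdevperf instance then reads the config over an anonymous fd, e.g.:
#   bdevperf -m 0x10 -i 1 --json <(gen_attach_controller_json 192.168.100.8 nqn.2016-06.io.spdk:cnode1) \
#            -q 128 -o 4096 -w write -t 1 -s 256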
00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:26.735 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:26.735 "params": { 00:18:26.735 "name": "Nvme1", 00:18:26.735 "trtype": "rdma", 00:18:26.735 "traddr": "192.168.100.8", 00:18:26.736 "adrfam": "ipv4", 00:18:26.736 "trsvcid": "4420", 00:18:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.736 "hdgst": false, 00:18:26.736 "ddgst": false 00:18:26.736 }, 00:18:26.736 "method": "bdev_nvme_attach_controller" 00:18:26.736 }' 00:18:26.736 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:26.736 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:26.736 "params": { 00:18:26.736 "name": "Nvme1", 00:18:26.736 "trtype": "rdma", 00:18:26.736 "traddr": "192.168.100.8", 00:18:26.736 "adrfam": "ipv4", 00:18:26.736 "trsvcid": "4420", 00:18:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.736 "hdgst": false, 00:18:26.736 "ddgst": false 00:18:26.736 }, 00:18:26.736 "method": "bdev_nvme_attach_controller" 00:18:26.736 }' 00:18:26.736 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:26.736 02:44:29 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:26.736 "params": { 00:18:26.736 "name": "Nvme1", 00:18:26.736 "trtype": "rdma", 00:18:26.736 "traddr": "192.168.100.8", 00:18:26.736 "adrfam": "ipv4", 00:18:26.736 "trsvcid": "4420", 00:18:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.736 "hdgst": false, 00:18:26.736 "ddgst": false 00:18:26.736 }, 00:18:26.736 "method": "bdev_nvme_attach_controller" 00:18:26.736 }' 00:18:26.736 [2024-05-15 02:44:29.985003] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:26.736 [2024-05-15 02:44:29.985078] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:26.736 [2024-05-15 02:44:29.986514] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:26.736 [2024-05-15 02:44:29.986587] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:26.736 [2024-05-15 02:44:29.987217] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:26.736 [2024-05-15 02:44:29.987282] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:26.736 [2024-05-15 02:44:29.990397] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:18:26.736 [2024-05-15 02:44:29.990467] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:26.992 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.992 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.992 [2024-05-15 02:44:30.195839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.992 [2024-05-15 02:44:30.227308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:26.992 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.249 [2024-05-15 02:44:30.298456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.249 [2024-05-15 02:44:30.329435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:27.249 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.249 [2024-05-15 02:44:30.407236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.249 [2024-05-15 02:44:30.439499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:27.249 [2024-05-15 02:44:30.456215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.249 [2024-05-15 02:44:30.486797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:27.505 Running I/O for 1 seconds... 00:18:27.505 Running I/O for 1 seconds... 00:18:27.505 Running I/O for 1 seconds... 00:18:27.505 Running I/O for 1 seconds... 00:18:28.437 00:18:28.437 Latency(us) 00:18:28.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.437 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:28.437 Nvme1n1 : 1.00 170892.53 667.55 0.00 0.00 746.17 299.19 2421.98 00:18:28.437 =================================================================================================================== 00:18:28.437 Total : 170892.53 667.55 0.00 0.00 746.17 299.19 2421.98 00:18:28.437 00:18:28.437 Latency(us) 00:18:28.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.437 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:28.437 Nvme1n1 : 1.01 14671.75 57.31 0.00 0.00 8691.34 4929.45 16184.54 00:18:28.437 =================================================================================================================== 00:18:28.437 Total : 14671.75 57.31 0.00 0.00 8691.34 4929.45 16184.54 00:18:28.437 00:18:28.437 Latency(us) 00:18:28.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.437 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:28.437 Nvme1n1 : 1.01 13365.91 52.21 0.00 0.00 9543.13 5983.72 21655.37 00:18:28.437 =================================================================================================================== 00:18:28.437 Total : 13365.91 52.21 0.00 0.00 9543.13 5983.72 21655.37 00:18:28.437 00:18:28.437 Latency(us) 00:18:28.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.437 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:28.437 Nvme1n1 : 1.01 12685.27 49.55 0.00 0.00 10043.24 6439.62 21655.37 00:18:28.437 =================================================================================================================== 00:18:28.437 Total : 12685.27 49.55 0.00 0.00 10043.24 6439.62 21655.37 00:18:28.694 02:44:31 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 808577 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 808579 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 808582 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:28.951 rmmod nvme_rdma 00:18:28.951 rmmod nvme_fabrics 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 808526 ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 808526 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 808526 ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 808526 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 808526 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 808526' 00:18:28.951 killing process with pid 808526 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 808526 00:18:28.951 [2024-05-15 02:44:32.189626] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:28.951 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 808526 00:18:29.208 [2024-05-15 02:44:32.295826] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:29.208 02:44:32 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.208 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:29.208 00:18:29.208 real 0m9.782s 00:18:29.208 user 0m19.050s 00:18:29.208 sys 0m6.556s 00:18:29.208 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:29.208 02:44:32 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.208 ************************************ 00:18:29.208 END TEST nvmf_bdev_io_wait 00:18:29.208 ************************************ 00:18:29.467 02:44:32 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:29.467 02:44:32 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:29.467 02:44:32 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:29.467 02:44:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:29.467 ************************************ 00:18:29.467 START TEST nvmf_queue_depth 00:18:29.467 ************************************ 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:29.467 * Looking for test storage... 00:18:29.467 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.467 02:44:32 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:36.029 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:36.029 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:36.029 Found net devices under 0000:18:00.0: mlx_0_0 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:36.029 Found net devices under 0000:18:00.1: mlx_0_1 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:36.029 02:44:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:36.029 02:44:39 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:36.029 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:36.030 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.030 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:18:36.030 altname enp24s0f0np0 00:18:36.030 altname ens785f0np0 00:18:36.030 inet 192.168.100.8/24 scope global mlx_0_0 00:18:36.030 valid_lft forever preferred_lft forever 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.030 
02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:36.030 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:36.030 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:18:36.030 altname enp24s0f1np1 00:18:36.030 altname ens785f1np1 00:18:36.030 inet 192.168.100.9/24 scope global mlx_0_1 00:18:36.030 valid_lft forever preferred_lft forever 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.030 
02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:36.030 192.168.100.9' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:36.030 192.168.100.9' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:36.030 192.168.100.9' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=811851 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 811851 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 811851 ']' 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
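Annotation: the interface/IP discovery traced above reduces to a small pipeline: walk the RDMA-capable netdevs, read each one's IPv4 address with ip/awk/cut, then split the list into first and second target IPs. A condensed sketch of that helper logic, with the mlx_0_0/mlx_0_1 names hard-coded as they appear on this rig (the real get_rdma_if_list derives them via rxe_cfg):

# Hedged sketch of get_ip_address / get_available_rdma_ips from nvmf/common.sh,
# reduced to the pipeline visible in the trace.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# On this testbed:
#   get_ip_address mlx_0_0 -> 192.168.100.8  (NVMF_FIRST_TARGET_IP)
#   get_ip_address mlx_0_1 -> 192.168.100.9  (NVMF_SECOND_TARGET_IP)
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)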
00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:36.030 02:44:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:36.030 [2024-05-15 02:44:39.309578] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:36.030 [2024-05-15 02:44:39.309654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.289 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.289 [2024-05-15 02:44:39.413161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.289 [2024-05-15 02:44:39.463798] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.289 [2024-05-15 02:44:39.463846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.289 [2024-05-15 02:44:39.463861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.289 [2024-05-15 02:44:39.463874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.289 [2024-05-15 02:44:39.463885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.289 [2024-05-15 02:44:39.463955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.855 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:36.855 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:18:36.855 02:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.855 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:36.855 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.113 02:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 [2024-05-15 02:44:40.209502] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xed4f50/0xed9440) succeed. 00:18:37.114 [2024-05-15 02:44:40.223007] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xed6450/0xf1aad0) succeed. 
00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 Malloc0 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 [2024-05-15 02:44:40.321848] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:37.114 [2024-05-15 02:44:40.322210] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=811941 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 811941 /var/tmp/bdevperf.sock 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 811941 ']' 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
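Annotation: before the queue-depth run, the target side is provisioned with a 64 MiB malloc bdev exposed through a single RDMA subsystem; the rpc_cmd calls in queue_depth.sh@23-27 traced above map directly onto rpc.py invocations. A minimal sketch, with the socket path and SPDK checkout location assumed to match the trace:

# Hedged sketch: target-side provisioning equivalent to queue_depth.sh@23-27 above.
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420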
00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:37.114 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.114 [2024-05-15 02:44:40.375415] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:37.114 [2024-05-15 02:44:40.375482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811941 ] 00:18:37.371 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.371 [2024-05-15 02:44:40.483689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.371 [2024-05-15 02:44:40.532339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.371 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:37.371 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:18:37.371 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.371 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.371 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 NVMe0n1 00:18:37.628 02:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.628 02:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.628 Running I/O for 10 seconds... 
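Annotation: on the initiator side, bdevperf is started in wait mode (-z) against its own RPC socket, the NVMe-oF controller is attached over that socket, and bdevperf.py then launches the 10-second verify run at queue depth 1024 ("Running I/O for 10 seconds..." above). A hedged sketch of that sequence using the paths shown in the trace; the sleep again stands in for waitforlisten:

# Hedged sketch of the initiator-side flow for the queue-depth test above.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

$SPDK_DIR/build/examples/bdevperf -z -r $BDEVPERF_SOCK -q 1024 -o 4096 -w verify -t 10 &
sleep 2                                  # stand-in for waitforlisten $BDEVPERF_SOCK

# Attach the remote namespace; it shows up as bdev NVMe0n1 in the results table.
$SPDK_DIR/scripts/rpc.py -s $BDEVPERF_SOCK bdev_nvme_attach_controller \
    -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the actual I/O phase.
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $BDEVPERF_SOCK perform_tests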
00:18:49.823 00:18:49.823 Latency(us) 00:18:49.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.823 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:49.823 Verification LBA range: start 0x0 length 0x4000 00:18:49.823 NVMe0n1 : 10.06 11455.89 44.75 0.00 0.00 88989.50 19603.81 56303.97 00:18:49.823 =================================================================================================================== 00:18:49.823 Total : 11455.89 44.75 0.00 0.00 88989.50 19603.81 56303.97 00:18:49.823 0 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 811941 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 811941 ']' 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 811941 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:49.823 02:44:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 811941 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 811941' 00:18:49.823 killing process with pid 811941 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 811941 00:18:49.823 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.823 00:18:49.823 Latency(us) 00:18:49.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.823 =================================================================================================================== 00:18:49.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 811941 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:49.823 rmmod nvme_rdma 00:18:49.823 rmmod nvme_fabrics 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 811851 ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 811851 00:18:49.823 
02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 811851 ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 811851 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 811851 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 811851' 00:18:49.823 killing process with pid 811851 00:18:49.823 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 811851 00:18:49.823 [2024-05-15 02:44:51.306721] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:49.824 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 811851 00:18:49.824 [2024-05-15 02:44:51.360330] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:18:49.824 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:49.824 00:18:49.824 real 0m18.995s 00:18:49.824 user 0m25.139s 00:18:49.824 sys 0m5.753s 00:18:49.824 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:49.824 02:44:51 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:49.824 ************************************ 00:18:49.824 END TEST nvmf_queue_depth 00:18:49.824 ************************************ 00:18:49.824 02:44:51 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:49.824 02:44:51 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:49.824 02:44:51 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:49.824 02:44:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:49.824 ************************************ 00:18:49.824 START TEST nvmf_target_multipath 00:18:49.824 ************************************ 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:18:49.824 * Looking for test storage... 
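The teardown traced just before the nvmf_target_multipath banner above is the standard nvmftestfini path from test/nvmf/common.sh plus killprocess from autotest_common.sh. As a sketch of what those xtrace lines are doing (the real helper retries the module removal inside a {1..20} loop, as the trace shows):

    # Cleanup per the trace: flush outstanding I/O, unload the NVMe fabrics kernel
    # modules, then kill the nvmf_tgt application and wait for it to exit.
    sync
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"     # 811851 in this run
    wait "$nvmfpid"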
00:18:49.824 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.824 02:44:51 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.091 02:44:57 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:55.091 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:55.092 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:55.092 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:55.092 Found net devices under 0000:18:00.0: mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:55.092 Found net devices under 0000:18:00.1: mlx_0_1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:55.092 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:55.092 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:18:55.092 altname enp24s0f0np0 00:18:55.092 altname ens785f0np0 00:18:55.092 inet 192.168.100.8/24 scope global mlx_0_0 00:18:55.092 valid_lft forever preferred_lft forever 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:55.092 02:44:57 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:55.092 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:55.092 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:18:55.092 altname enp24s0f1np1 00:18:55.092 altname ens785f1np1 00:18:55.092 inet 192.168.100.9/24 scope global mlx_0_1 00:18:55.092 valid_lft forever preferred_lft forever 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:55.092 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.093 02:44:57 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:55.093 192.168.100.9' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:55.093 192.168.100.9' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:55.093 192.168.100.9' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:18:55.093 run this test only with TCP transport for now 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:55.093 rmmod nvme_rdma 00:18:55.093 rmmod nvme_fabrics 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:55.093 
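The interface walk traced above is how the framework turns the two detected mlx5 ports into target addresses for the multipath test. Roughly, with the helper name and pipeline taken from the trace (the exact body of the helper in nvmf/common.sh may differ from this sketch):

    # Address discovery per the trace: read the IPv4 address from each RDMA-capable
    # netdev, use the first two as the target IPs, then fix the transport options
    # and load the host-side driver.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma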
02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:55.093 00:18:55.093 real 0m6.475s 00:18:55.093 user 0m1.753s 00:18:55.093 sys 0m4.872s 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:55.093 02:44:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:55.093 ************************************ 00:18:55.093 END TEST nvmf_target_multipath 00:18:55.093 ************************************ 00:18:55.093 02:44:58 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:55.093 02:44:58 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:55.093 02:44:58 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:55.093 02:44:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:55.093 ************************************ 00:18:55.093 START TEST nvmf_zcopy 00:18:55.093 ************************************ 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:18:55.093 * Looking for test storage... 
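The reason the multipath test finishes after only a few seconds of setup on this RDMA run is the guard traced just before the nvmf_zcopy banner above (multipath.sh lines 51-54). A sketch of that guard; the transport variable name is an assumption, since the trace only shows its expanded value, 'rdma':

    # Early exit for non-TCP transports: print a notice, tear the target down, and
    # return success so the surrounding run_test still passes.
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo 'run this test only with TCP transport for now'
        nvmftestfini
        exit 0
    fi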
00:18:55.093 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.093 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.094 02:44:58 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:01.731 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:01.731 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:01.731 Found net devices under 0000:18:00.0: mlx_0_0 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:01.731 Found net devices under 0000:18:00.1: mlx_0_1 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.731 02:45:04 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:01.731 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:01.732 02:45:04 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:01.732 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:01.732 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:19:01.732 altname enp24s0f0np0 00:19:01.732 altname ens785f0np0 00:19:01.732 inet 192.168.100.8/24 scope global mlx_0_0 00:19:01.732 valid_lft forever preferred_lft forever 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:01.732 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:01.732 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:19:01.732 altname enp24s0f1np1 00:19:01.732 altname ens785f1np1 00:19:01.732 inet 192.168.100.9/24 scope global mlx_0_1 00:19:01.732 valid_lft forever preferred_lft forever 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:01.732 192.168.100.9' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:01.732 192.168.100.9' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:01.732 192.168.100.9' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=819379 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 819379 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 819379 ']' 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:01.732 02:45:04 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:01.732 [2024-05-15 02:45:04.904149] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:01.732 [2024-05-15 02:45:04.904218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.732 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.732 [2024-05-15 02:45:05.003082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.990 [2024-05-15 02:45:05.050505] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.990 [2024-05-15 02:45:05.050556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.990 [2024-05-15 02:45:05.050571] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.990 [2024-05-15 02:45:05.050585] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.990 [2024-05-15 02:45:05.050595] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
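Annotation: the nvmfappstart step above amounts to launching the target binary and blocking until its JSON-RPC socket answers. A minimal sketch of the equivalent shell, using only the paths and flags printed in this log; the real helper is the nvmfappstart/waitforlisten pair in the test harness, and the rpc_get_methods probe plus the retry cadence below are illustrative rather than what that helper does verbatim:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk        # checkout used by this job
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &        # core mask 0x2 -> reactor on core 1
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to accept commands
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done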
00:19:01.990 [2024-05-15 02:45:05.050626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:01.990 Unsupported transport: rdma 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # type=--id 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # id=0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@817 -- # for n in $shm_files 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:01.990 nvmf_trace.0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # return 0 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.990 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:01.990 rmmod nvme_rdma 00:19:02.247 rmmod nvme_fabrics 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 819379 ']' 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 819379 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 819379 ']' 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 819379 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy 
-- common/autotest_common.sh@952 -- # uname 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 819379 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 819379' 00:19:02.247 killing process with pid 819379 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 819379 00:19:02.247 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 819379 00:19:02.505 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:02.505 00:19:02.505 real 0m7.327s 00:19:02.505 user 0m2.648s 00:19:02.505 sys 0m5.337s 00:19:02.505 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:02.505 02:45:05 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:02.505 ************************************ 00:19:02.505 END TEST nvmf_zcopy 00:19:02.505 ************************************ 00:19:02.505 02:45:05 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:02.505 02:45:05 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:02.505 02:45:05 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:02.505 02:45:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:02.505 ************************************ 00:19:02.505 START TEST nvmf_nmic 00:19:02.505 ************************************ 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:02.505 * Looking for test storage... 
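Annotation: the nmic test that starts here re-runs the same RDMA address discovery seen at the top of this excerpt (nvmf/common.sh@112-113). Reconstructed from that trace, the per-interface lookup is essentially the sketch below; the interface names mlx_0_0/mlx_0_1 and the 192.168.100.0/24 addresses are specific to this rig:

  get_ip_address() {
      local interface=$1
      # first IPv4 address on the interface, with the /prefix stripped
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this host
  get_ip_address mlx_0_1   # prints 192.168.100.9 on this host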
00:19:02.505 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.505 
02:45:05 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.505 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.506 02:45:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.506 02:45:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.763 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.763 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.763 02:45:05 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.763 02:45:05 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:09.361 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:09.361 02:45:12 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:09.361 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:09.361 Found net devices under 0000:18:00.0: mlx_0_0 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.361 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:09.361 Found net devices under 0000:18:00.1: mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:09.362 02:45:12 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:09.362 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:19:09.362 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:19:09.362 altname enp24s0f0np0 00:19:09.362 altname ens785f0np0 00:19:09.362 inet 192.168.100.8/24 scope global mlx_0_0 00:19:09.362 valid_lft forever preferred_lft forever 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:09.362 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:09.362 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:19:09.362 altname enp24s0f1np1 00:19:09.362 altname ens785f1np1 00:19:09.362 inet 192.168.100.9/24 scope global mlx_0_1 00:19:09.362 valid_lft forever preferred_lft forever 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:09.362 192.168.100.9' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:09.362 192.168.100.9' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:09.362 192.168.100.9' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=822544 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 822544 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 822544 ']' 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:09.362 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.362 [2024-05-15 02:45:12.495806] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:09.362 [2024-05-15 02:45:12.495884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.362 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.362 [2024-05-15 02:45:12.606652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.620 [2024-05-15 02:45:12.660498] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.620 [2024-05-15 02:45:12.660546] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.620 [2024-05-15 02:45:12.660561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.620 [2024-05-15 02:45:12.660574] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.620 [2024-05-15 02:45:12.660585] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.620 [2024-05-15 02:45:12.660641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.620 [2024-05-15 02:45:12.660724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.620 [2024-05-15 02:45:12.660827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.620 [2024-05-15 02:45:12.660828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.620 02:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.620 [2024-05-15 02:45:12.860506] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xea2d70/0xea7260) succeed. 00:19:09.620 [2024-05-15 02:45:12.875522] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xea43b0/0xee88f0) succeed. 
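Annotation: the rpc_cmd calls in this test are effectively wrappers around scripts/rpc.py talking to the socket above. A rough sketch of the transport setup that just produced the two "Create IB device" notices, with the same flags as in the trace (the two mlx5 devices correspond to the Mellanox ports 0x15b3:0x1015 enumerated earlier):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # one RDMA transport for the whole target; shared-buffer count and IO unit size as used by this test
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The bdev, subsystem, namespace, and listener RPCs that follow below go through the same rpc.py entry point.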
00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.878 Malloc0 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.878 [2024-05-15 02:45:13.079013] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:09.878 [2024-05-15 02:45:13.079447] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:09.878 test case1: single bdev can't be used in multiple subsystems 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:09.878 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.879 02:45:13 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.879 [2024-05-15 02:45:13.103149] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:09.879 [2024-05-15 02:45:13.103176] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:09.879 [2024-05-15 02:45:13.103190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.879 request: 00:19:09.879 { 00:19:09.879 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:09.879 "namespace": { 00:19:09.879 "bdev_name": "Malloc0", 00:19:09.879 "no_auto_visible": false 00:19:09.879 }, 00:19:09.879 "method": "nvmf_subsystem_add_ns", 00:19:09.879 "req_id": 1 00:19:09.879 } 00:19:09.879 Got JSON-RPC error response 00:19:09.879 response: 00:19:09.879 { 00:19:09.879 "code": -32602, 00:19:09.879 "message": "Invalid parameters" 00:19:09.879 } 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:09.879 Adding namespace failed - expected result. 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:09.879 test case2: host connect to nvmf target in multiple paths 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.879 [2024-05-15 02:45:13.119254] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.879 02:45:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:10.809 02:45:14 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:12.178 02:45:15 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:12.178 02:45:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:19:12.178 02:45:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.178 02:45:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:19:12.178 02:45:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:19:14.075 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 
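Annotation: the two nvme connect invocations above attach the same subsystem over both listeners (4420 and 4421); waitforserial then polls until a block device with the expected serial shows up, after which the fio write pass below runs against it. A hedged sketch of that wait, using the same lsblk/grep check as the trace; the 15-iteration retry and the exact device-count comparison live in autotest_common.sh, and grep -qw here simplifies the grep -c counting the harness does:

  # wait until at least one namespace with the expected serial is visible
  while ! lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
      sleep 2
  done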
00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:19:14.076 02:45:17 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:14.076 [global] 00:19:14.076 thread=1 00:19:14.076 invalidate=1 00:19:14.076 rw=write 00:19:14.076 time_based=1 00:19:14.076 runtime=1 00:19:14.076 ioengine=libaio 00:19:14.076 direct=1 00:19:14.076 bs=4096 00:19:14.076 iodepth=1 00:19:14.076 norandommap=0 00:19:14.076 numjobs=1 00:19:14.076 00:19:14.076 verify_dump=1 00:19:14.076 verify_backlog=512 00:19:14.076 verify_state_save=0 00:19:14.076 do_verify=1 00:19:14.076 verify=crc32c-intel 00:19:14.076 [job0] 00:19:14.076 filename=/dev/nvme0n1 00:19:14.076 Could not set queue depth (nvme0n1) 00:19:14.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.333 fio-3.35 00:19:14.333 Starting 1 thread 00:19:15.708 00:19:15.708 job0: (groupid=0, jobs=1): err= 0: pid=823386: Wed May 15 02:45:18 2024 00:19:15.708 read: IOPS=6656, BW=26.0MiB/s (27.3MB/s)(26.0MiB/1000msec) 00:19:15.708 slat (nsec): min=8485, max=50904, avg=9000.52, stdev=1254.23 00:19:15.708 clat (usec): min=46, max=1150, avg=63.93, stdev=14.94 00:19:15.708 lat (usec): min=61, max=1159, avg=72.93, stdev=15.04 00:19:15.708 clat percentiles (usec): 00:19:15.708 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:19:15.708 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:19:15.708 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 76], 00:19:15.708 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 147], 99.95th=[ 159], 00:19:15.708 | 99.99th=[ 1156] 00:19:15.708 write: IOPS=6845, BW=26.7MiB/s (28.0MB/s)(26.7MiB/1000msec); 0 zone resets 00:19:15.708 slat (nsec): min=10432, max=53410, avg=11078.08, stdev=1294.41 00:19:15.708 clat (usec): min=47, max=367, avg=60.17, stdev= 6.87 00:19:15.708 lat (usec): min=60, max=378, avg=71.25, stdev= 7.06 00:19:15.708 clat percentiles (usec): 00:19:15.708 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:19:15.708 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:19:15.708 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 70], 00:19:15.708 | 99.00th=[ 78], 99.50th=[ 81], 99.90th=[ 119], 99.95th=[ 163], 00:19:15.708 | 99.99th=[ 367] 00:19:15.708 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:19:15.708 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:19:15.708 lat (usec) : 50=0.08%, 100=99.73%, 250=0.17%, 500=0.01% 00:19:15.708 lat (msec) : 2=0.01% 00:19:15.708 cpu : usr=7.90%, sys=13.80%, ctx=13502, majf=0, minf=1 00:19:15.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.708 issued rwts: total=6656,6845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.708 00:19:15.708 Run status group 0 (all jobs): 00:19:15.708 READ: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=26.0MiB (27.3MB), run=1000-1000msec 
00:19:15.708 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=26.7MiB (28.0MB), run=1000-1000msec 00:19:15.708 00:19:15.708 Disk stats (read/write): 00:19:15.709 nvme0n1: ios=6169/6144, merge=0/0, ticks=352/319, in_queue=671, util=90.78% 00:19:15.709 02:45:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:17.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:17.608 rmmod nvme_rdma 00:19:17.608 rmmod nvme_fabrics 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 822544 ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 822544 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 822544 ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 822544 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 822544 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 822544' 00:19:17.608 killing process with pid 822544 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 822544 00:19:17.608 [2024-05-15 02:45:20.670428] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:17.608 02:45:20 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 822544 00:19:17.608 [2024-05-15 02:45:20.782620] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:17.868 02:45:21 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:17.868 02:45:21 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:17.868 00:19:17.868 real 0m15.373s 00:19:17.868 user 0m37.887s 00:19:17.868 sys 0m6.126s 00:19:17.868 02:45:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:17.868 02:45:21 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:17.868 ************************************ 00:19:17.868 END TEST nvmf_nmic 00:19:17.868 ************************************ 00:19:17.868 02:45:21 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:17.868 02:45:21 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:17.868 02:45:21 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:17.868 02:45:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:17.868 ************************************ 00:19:17.868 START TEST nvmf_fio_target 00:19:17.868 ************************************ 00:19:17.868 02:45:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:18.128 * Looking for test storage... 00:19:18.128 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.128 02:45:21 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.699 02:45:27 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:24.699 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:24.699 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:24.699 Found net devices under 0000:18:00.0: mlx_0_0 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.699 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:24.699 Found net devices under 0000:18:00.1: mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:24.700 02:45:27 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:24.700 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:24.700 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:19:24.700 altname enp24s0f0np0 00:19:24.700 altname ens785f0np0 00:19:24.700 inet 192.168.100.8/24 scope global mlx_0_0 00:19:24.700 valid_lft forever preferred_lft forever 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:24.700 02:45:27 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:24.700 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:24.700 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:19:24.700 altname enp24s0f1np1 00:19:24.700 altname ens785f1np1 00:19:24.700 inet 192.168.100.9/24 scope global mlx_0_1 00:19:24.700 valid_lft forever preferred_lft forever 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:24.700 192.168.100.9' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:24.700 192.168.100.9' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:24.700 192.168.100.9' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=826655 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 826655 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 826655 ']' 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
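For reference, the interface-address lookup traced above reduces to the following shell pattern (a minimal sketch built only from the commands visible in the trace; it assumes the mlx_0_0/mlx_0_1 netdevs exist and already carry the 192.168.100.x addresses):

# Extract the IPv4 address of an RDMA-capable interface, as common.sh does above
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# Build the list and split it the same way the script does (head / tail -n +2)
RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run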
00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:24.700 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.700 [2024-05-15 02:45:27.702905] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:24.700 [2024-05-15 02:45:27.702970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.701 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.701 [2024-05-15 02:45:27.797086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.701 [2024-05-15 02:45:27.850487] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.701 [2024-05-15 02:45:27.850539] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.701 [2024-05-15 02:45:27.850554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.701 [2024-05-15 02:45:27.850567] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.701 [2024-05-15 02:45:27.850578] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.701 [2024-05-15 02:45:27.850641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.701 [2024-05-15 02:45:27.850737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.701 [2024-05-15 02:45:27.850827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.701 [2024-05-15 02:45:27.850827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.701 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:24.701 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:19:24.701 02:45:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.701 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:24.701 02:45:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.960 02:45:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.960 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:25.220 [2024-05-15 02:45:28.274461] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23c6d70/0x23cb260) succeed. 00:19:25.220 [2024-05-15 02:45:28.289353] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23c83b0/0x240c8f0) succeed. 
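The nvmf_create_transport call above is the point where the RDMA transport is registered with the freshly started target. Condensed, the step looks like this (a sketch with the arguments copied verbatim from the fio.sh@19 call in the trace; SPDK_DIR is just a convenience variable, and nvmf_tgt is assumed to be up and listening on /var/tmp/spdk.sock):

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Register the RDMA transport with the running nvmf_tgt (arguments as used by this test run)
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# On success the target prints one "Create IB device mlx5_X(...) succeed." notice per port,
# matching the two rdma.c:2576 messages logged above.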
00:19:25.220 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:25.478 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:25.478 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:25.736 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:25.736 02:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:25.994 02:45:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:25.994 02:45:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:26.252 02:45:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:26.252 02:45:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:26.510 02:45:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:26.768 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:26.768 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:27.026 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:27.026 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:27.285 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:27.285 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:27.543 02:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:27.801 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:27.801 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.059 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:28.059 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.318 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:28.576 [2024-05-15 02:45:31.795989] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:28.576 [2024-05-15 02:45:31.796364] rdma.c:3032:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:28.576 02:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:28.834 02:45:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:29.091 02:45:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:19:30.120 02:45:33 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:19:32.026 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:19:32.026 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:32.026 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:19:32.286 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:19:32.286 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.286 02:45:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:19:32.286 02:45:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:32.286 [global] 00:19:32.286 thread=1 00:19:32.286 invalidate=1 00:19:32.286 rw=write 00:19:32.286 time_based=1 00:19:32.286 runtime=1 00:19:32.286 ioengine=libaio 00:19:32.286 direct=1 00:19:32.286 bs=4096 00:19:32.286 iodepth=1 00:19:32.286 norandommap=0 00:19:32.286 numjobs=1 00:19:32.286 00:19:32.286 verify_dump=1 00:19:32.286 verify_backlog=512 00:19:32.286 verify_state_save=0 00:19:32.286 do_verify=1 00:19:32.286 verify=crc32c-intel 00:19:32.286 [job0] 00:19:32.286 filename=/dev/nvme0n1 00:19:32.286 [job1] 00:19:32.286 filename=/dev/nvme0n2 00:19:32.286 [job2] 00:19:32.286 filename=/dev/nvme0n3 00:19:32.286 [job3] 00:19:32.286 filename=/dev/nvme0n4 00:19:32.286 Could not set queue depth (nvme0n1) 00:19:32.286 Could not set queue depth (nvme0n2) 00:19:32.286 Could not set queue depth (nvme0n3) 00:19:32.286 Could not set queue depth (nvme0n4) 00:19:32.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.545 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.545 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.545 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:32.545 
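Host side, the connect-and-run sequence traced above condenses to the following (a sketch; the NQN, host UUID, address and fio parameters are the values visible in this run, and the fio-wrapper expansion is summarized from the job dump rather than reproduced):

# Connect to the subsystem exported by the target (serial SPDKISFASTANDAWESOME, 4 namespaces)
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e \
    --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
# Wait until all four namespaces are visible before starting I/O
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 4
# "fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v" corresponds to the libaio job dumped above:
# bs=4096, iodepth=1, rw=write, time_based runtime=1, verify=crc32c-intel, one job per /dev/nvme0nX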
fio-3.35 00:19:32.545 Starting 4 threads 00:19:33.923 00:19:33.923 job0: (groupid=0, jobs=1): err= 0: pid=827915: Wed May 15 02:45:36 2024 00:19:33.923 read: IOPS=2135, BW=8543KiB/s (8748kB/s)(8552KiB/1001msec) 00:19:33.923 slat (nsec): min=8545, max=28586, avg=10554.33, stdev=1753.77 00:19:33.923 clat (usec): min=94, max=312, avg=203.73, stdev=32.71 00:19:33.923 lat (usec): min=104, max=323, avg=214.29, stdev=32.79 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 118], 5.00th=[ 139], 10.00th=[ 165], 20.00th=[ 190], 00:19:33.923 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:19:33.923 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 265], 00:19:33.923 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 310], 99.95th=[ 314], 00:19:33.923 | 99.99th=[ 314] 00:19:33.923 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:33.923 slat (nsec): min=10224, max=49529, avg=12296.90, stdev=1936.59 00:19:33.923 clat (usec): min=88, max=320, avg=195.01, stdev=35.51 00:19:33.923 lat (usec): min=100, max=340, avg=207.31, stdev=35.52 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 110], 5.00th=[ 130], 10.00th=[ 153], 20.00th=[ 178], 00:19:33.923 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:19:33.923 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 262], 00:19:33.923 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 322], 00:19:33.923 | 99.99th=[ 322] 00:19:33.923 bw ( KiB/s): min=11336, max=11336, per=25.22%, avg=11336.00, stdev= 0.00, samples=1 00:19:33.923 iops : min= 2834, max= 2834, avg=2834.00, stdev= 0.00, samples=1 00:19:33.923 lat (usec) : 100=0.15%, 250=92.66%, 500=7.19% 00:19:33.923 cpu : usr=3.50%, sys=5.30%, ctx=4699, majf=0, minf=1 00:19:33.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 issued rwts: total=2138,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.923 job1: (groupid=0, jobs=1): err= 0: pid=827916: Wed May 15 02:45:36 2024 00:19:33.923 read: IOPS=2158, BW=8635KiB/s (8843kB/s)(8644KiB/1001msec) 00:19:33.923 slat (nsec): min=8667, max=37239, avg=10484.38, stdev=1674.13 00:19:33.923 clat (usec): min=109, max=350, avg=204.16, stdev=32.62 00:19:33.923 lat (usec): min=120, max=359, avg=214.65, stdev=32.67 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 119], 5.00th=[ 143], 10.00th=[ 167], 20.00th=[ 190], 00:19:33.923 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:19:33.923 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 262], 00:19:33.923 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 334], 00:19:33.923 | 99.99th=[ 351] 00:19:33.923 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:33.923 slat (nsec): min=10385, max=88269, avg=12133.67, stdev=2353.35 00:19:33.923 clat (usec): min=94, max=324, avg=193.27, stdev=34.35 00:19:33.923 lat (usec): min=107, max=335, avg=205.41, stdev=34.44 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 113], 5.00th=[ 133], 10.00th=[ 153], 20.00th=[ 176], 00:19:33.923 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:19:33.923 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 260], 00:19:33.923 | 99.00th=[ 297], 99.50th=[ 310], 
99.90th=[ 322], 99.95th=[ 322], 00:19:33.923 | 99.99th=[ 326] 00:19:33.923 bw ( KiB/s): min=11464, max=11464, per=25.50%, avg=11464.00, stdev= 0.00, samples=1 00:19:33.923 iops : min= 2866, max= 2866, avg=2866.00, stdev= 0.00, samples=1 00:19:33.923 lat (usec) : 100=0.04%, 250=93.90%, 500=6.06% 00:19:33.923 cpu : usr=2.60%, sys=6.10%, ctx=4722, majf=0, minf=1 00:19:33.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 issued rwts: total=2161,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.923 job2: (groupid=0, jobs=1): err= 0: pid=827919: Wed May 15 02:45:36 2024 00:19:33.923 read: IOPS=2417, BW=9670KiB/s (9902kB/s)(9680KiB/1001msec) 00:19:33.923 slat (nsec): min=8654, max=42479, avg=9522.81, stdev=1382.59 00:19:33.923 clat (usec): min=103, max=439, avg=191.99, stdev=48.80 00:19:33.923 lat (usec): min=112, max=448, avg=201.51, stdev=48.84 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 110], 5.00th=[ 120], 10.00th=[ 127], 20.00th=[ 137], 00:19:33.923 | 30.00th=[ 176], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 204], 00:19:33.923 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 285], 00:19:33.923 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 388], 99.95th=[ 429], 00:19:33.923 | 99.99th=[ 441] 00:19:33.923 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:33.923 slat (nsec): min=10728, max=55705, avg=11690.92, stdev=1639.85 00:19:33.923 clat (usec): min=95, max=417, avg=184.41, stdev=45.05 00:19:33.923 lat (usec): min=106, max=428, avg=196.10, stdev=45.09 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 106], 5.00th=[ 117], 10.00th=[ 123], 20.00th=[ 133], 00:19:33.923 | 30.00th=[ 165], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:19:33.923 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 260], 00:19:33.923 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 396], 99.95th=[ 396], 00:19:33.923 | 99.99th=[ 416] 00:19:33.923 bw ( KiB/s): min=12288, max=12288, per=27.34%, avg=12288.00, stdev= 0.00, samples=1 00:19:33.923 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:33.923 lat (usec) : 100=0.04%, 250=91.57%, 500=8.39% 00:19:33.923 cpu : usr=3.00%, sys=5.60%, ctx=4980, majf=0, minf=1 00:19:33.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 issued rwts: total=2420,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.923 job3: (groupid=0, jobs=1): err= 0: pid=827920: Wed May 15 02:45:36 2024 00:19:33.923 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:33.923 slat (nsec): min=8654, max=21486, avg=9450.98, stdev=1022.62 00:19:33.923 clat (usec): min=72, max=291, avg=142.98, stdev=59.35 00:19:33.923 lat (usec): min=81, max=301, avg=152.43, stdev=59.63 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:19:33.923 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 112], 60.00th=[ 188], 00:19:33.923 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 237], 00:19:33.923 | 
99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:19:33.923 | 99.99th=[ 293] 00:19:33.923 write: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:19:33.923 slat (nsec): min=10572, max=48214, avg=11629.22, stdev=1467.13 00:19:33.923 clat (usec): min=67, max=276, avg=133.53, stdev=55.28 00:19:33.923 lat (usec): min=78, max=287, avg=145.16, stdev=55.61 00:19:33.923 clat percentiles (usec): 00:19:33.923 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 83], 00:19:33.923 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 172], 00:19:33.923 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 225], 00:19:33.923 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 269], 99.95th=[ 273], 00:19:33.923 | 99.99th=[ 277] 00:19:33.923 bw ( KiB/s): min=11456, max=11456, per=25.49%, avg=11456.00, stdev= 0.00, samples=1 00:19:33.923 iops : min= 2864, max= 2864, avg=2864.00, stdev= 0.00, samples=1 00:19:33.923 lat (usec) : 100=50.14%, 250=46.86%, 500=3.00% 00:19:33.923 cpu : usr=3.50%, sys=7.80%, ctx=6641, majf=0, minf=1 00:19:33.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.923 issued rwts: total=3072,3569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.923 00:19:33.923 Run status group 0 (all jobs): 00:19:33.923 READ: bw=38.2MiB/s (40.1MB/s), 8543KiB/s-12.0MiB/s (8748kB/s-12.6MB/s), io=38.2MiB (40.1MB), run=1001-1001msec 00:19:33.923 WRITE: bw=43.9MiB/s (46.0MB/s), 9.99MiB/s-13.9MiB/s (10.5MB/s-14.6MB/s), io=43.9MiB (46.1MB), run=1001-1001msec 00:19:33.923 00:19:33.923 Disk stats (read/write): 00:19:33.923 nvme0n1: ios=1816/2048, merge=0/0, ticks=339/385, in_queue=724, util=80.96% 00:19:33.923 nvme0n2: ios=1792/2048, merge=0/0, ticks=345/372, in_queue=717, util=82.58% 00:19:33.923 nvme0n3: ios=2048/2115, merge=0/0, ticks=368/363, in_queue=731, util=87.35% 00:19:33.923 nvme0n4: ios=2051/2560, merge=0/0, ticks=339/379, in_queue=718, util=89.21% 00:19:33.923 02:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:33.923 [global] 00:19:33.923 thread=1 00:19:33.923 invalidate=1 00:19:33.923 rw=randwrite 00:19:33.923 time_based=1 00:19:33.923 runtime=1 00:19:33.923 ioengine=libaio 00:19:33.923 direct=1 00:19:33.923 bs=4096 00:19:33.923 iodepth=1 00:19:33.923 norandommap=0 00:19:33.923 numjobs=1 00:19:33.923 00:19:33.923 verify_dump=1 00:19:33.923 verify_backlog=512 00:19:33.923 verify_state_save=0 00:19:33.923 do_verify=1 00:19:33.923 verify=crc32c-intel 00:19:33.923 [job0] 00:19:33.923 filename=/dev/nvme0n1 00:19:33.923 [job1] 00:19:33.923 filename=/dev/nvme0n2 00:19:33.923 [job2] 00:19:33.923 filename=/dev/nvme0n3 00:19:33.923 [job3] 00:19:33.923 filename=/dev/nvme0n4 00:19:33.923 Could not set queue depth (nvme0n1) 00:19:33.923 Could not set queue depth (nvme0n2) 00:19:33.923 Could not set queue depth (nvme0n3) 00:19:33.923 Could not set queue depth (nvme0n4) 00:19:34.182 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.182 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.182 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.182 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:34.182 fio-3.35 00:19:34.182 Starting 4 threads 00:19:35.568 00:19:35.568 job0: (groupid=0, jobs=1): err= 0: pid=828215: Wed May 15 02:45:38 2024 00:19:35.568 read: IOPS=3577, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:35.568 slat (nsec): min=8385, max=28432, avg=9166.59, stdev=1054.15 00:19:35.568 clat (usec): min=86, max=438, avg=129.77, stdev=35.10 00:19:35.568 lat (usec): min=95, max=447, avg=138.94, stdev=35.09 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 100], 00:19:35.568 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 147], 00:19:35.568 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 196], 00:19:35.568 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 277], 99.95th=[ 314], 00:19:35.568 | 99.99th=[ 441] 00:19:35.568 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:35.568 slat (nsec): min=10179, max=77453, avg=11019.31, stdev=1595.72 00:19:35.568 clat (usec): min=82, max=348, avg=125.15, stdev=36.09 00:19:35.568 lat (usec): min=93, max=359, avg=136.17, stdev=36.21 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:19:35.568 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 123], 00:19:35.568 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 198], 00:19:35.568 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 265], 99.95th=[ 322], 00:19:35.568 | 99.99th=[ 351] 00:19:35.568 bw ( KiB/s): min=17712, max=17712, per=30.47%, avg=17712.00, stdev= 0.00, samples=1 00:19:35.568 iops : min= 4428, max= 4428, avg=4428.00, stdev= 0.00, samples=1 00:19:35.568 lat (usec) : 100=29.53%, 250=70.17%, 500=0.29% 00:19:35.568 cpu : usr=4.40%, sys=7.30%, ctx=7166, majf=0, minf=1 00:19:35.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:35.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 issued rwts: total=3581,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:35.568 job1: (groupid=0, jobs=1): err= 0: pid=828218: Wed May 15 02:45:38 2024 00:19:35.568 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:35.568 slat (nsec): min=7210, max=33853, avg=9045.58, stdev=1061.47 00:19:35.568 clat (usec): min=72, max=462, avg=127.14, stdev=20.75 00:19:35.568 lat (usec): min=81, max=471, avg=136.18, stdev=20.77 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 83], 5.00th=[ 100], 10.00th=[ 110], 20.00th=[ 116], 00:19:35.568 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:19:35.568 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 161], 00:19:35.568 | 99.00th=[ 212], 99.50th=[ 237], 99.90th=[ 255], 99.95th=[ 314], 00:19:35.568 | 99.99th=[ 465] 00:19:35.568 write: IOPS=3613, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec); 0 zone resets 00:19:35.568 slat (nsec): min=5180, max=48742, avg=11053.22, stdev=1389.87 00:19:35.568 clat (usec): min=69, max=262, avg=126.50, stdev=22.93 00:19:35.568 lat (usec): min=80, max=273, avg=137.55, stdev=22.91 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 80], 5.00th=[ 95], 10.00th=[ 106], 20.00th=[ 113], 00:19:35.568 | 30.00th=[ 117], 40.00th=[ 
121], 50.00th=[ 124], 60.00th=[ 127], 00:19:35.568 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 157], 95.00th=[ 167], 00:19:35.568 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 247], 99.95th=[ 253], 00:19:35.568 | 99.99th=[ 265] 00:19:35.568 bw ( KiB/s): min=14712, max=14712, per=25.31%, avg=14712.00, stdev= 0.00, samples=1 00:19:35.568 iops : min= 3678, max= 3678, avg=3678.00, stdev= 0.00, samples=1 00:19:35.568 lat (usec) : 100=5.50%, 250=94.40%, 500=0.10% 00:19:35.568 cpu : usr=4.00%, sys=7.90%, ctx=7202, majf=0, minf=1 00:19:35.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:35.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 issued rwts: total=3584,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:35.568 job2: (groupid=0, jobs=1): err= 0: pid=828219: Wed May 15 02:45:38 2024 00:19:35.568 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:19:35.568 slat (nsec): min=8549, max=27219, avg=9336.76, stdev=1160.93 00:19:35.568 clat (usec): min=99, max=399, avg=144.96, stdev=26.94 00:19:35.568 lat (usec): min=109, max=408, avg=154.30, stdev=26.95 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:19:35.568 | 30.00th=[ 123], 40.00th=[ 130], 50.00th=[ 153], 60.00th=[ 159], 00:19:35.568 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 182], 00:19:35.568 | 99.00th=[ 210], 99.50th=[ 231], 99.90th=[ 293], 99.95th=[ 379], 00:19:35.568 | 99.99th=[ 400] 00:19:35.568 write: IOPS=3317, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:19:35.568 slat (nsec): min=10237, max=43515, avg=11359.78, stdev=1569.49 00:19:35.568 clat (usec): min=90, max=395, avg=143.15, stdev=26.12 00:19:35.568 lat (usec): min=101, max=407, avg=154.51, stdev=26.17 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 115], 00:19:35.568 | 30.00th=[ 122], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:19:35.568 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 178], 00:19:35.568 | 99.00th=[ 208], 99.50th=[ 225], 99.90th=[ 285], 99.95th=[ 310], 00:19:35.568 | 99.99th=[ 396] 00:19:35.568 bw ( KiB/s): min=15696, max=15696, per=27.00%, avg=15696.00, stdev= 0.00, samples=1 00:19:35.568 iops : min= 3924, max= 3924, avg=3924.00, stdev= 0.00, samples=1 00:19:35.568 lat (usec) : 100=0.52%, 250=99.19%, 500=0.30% 00:19:35.568 cpu : usr=3.40%, sys=7.40%, ctx=6393, majf=0, minf=1 00:19:35.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:35.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 issued rwts: total=3072,3321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:35.568 job3: (groupid=0, jobs=1): err= 0: pid=828220: Wed May 15 02:45:38 2024 00:19:35.568 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:19:35.568 slat (nsec): min=4201, max=32092, avg=9088.13, stdev=1264.05 00:19:35.568 clat (usec): min=78, max=255, avg=119.29, stdev=14.84 00:19:35.568 lat (usec): min=87, max=265, avg=128.38, stdev=14.87 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 108], 00:19:35.568 
| 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 124], 00:19:35.568 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 141], 00:19:35.568 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 186], 00:19:35.568 | 99.99th=[ 255] 00:19:35.568 write: IOPS=4019, BW=15.7MiB/s (16.5MB/s)(15.7MiB/1001msec); 0 zone resets 00:19:35.568 slat (nsec): min=6487, max=44702, avg=11389.98, stdev=1351.65 00:19:35.568 clat (usec): min=76, max=368, avg=119.01, stdev=15.04 00:19:35.568 lat (usec): min=86, max=389, avg=130.40, stdev=15.20 00:19:35.568 clat percentiles (usec): 00:19:35.568 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 96], 20.00th=[ 110], 00:19:35.568 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:19:35.568 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 141], 00:19:35.568 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 188], 00:19:35.568 | 99.99th=[ 367] 00:19:35.568 bw ( KiB/s): min=16384, max=16384, per=28.19%, avg=16384.00, stdev= 0.00, samples=1 00:19:35.568 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:35.568 lat (usec) : 100=13.12%, 250=86.86%, 500=0.03% 00:19:35.568 cpu : usr=4.50%, sys=8.10%, ctx=7608, majf=0, minf=1 00:19:35.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:35.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:35.568 issued rwts: total=3584,4024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:35.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:35.568 00:19:35.568 Run status group 0 (all jobs): 00:19:35.568 READ: bw=53.9MiB/s (56.6MB/s), 12.0MiB/s-14.0MiB/s (12.6MB/s-14.7MB/s), io=54.0MiB (56.6MB), run=1001-1001msec 00:19:35.568 WRITE: bw=56.8MiB/s (59.5MB/s), 13.0MiB/s-15.7MiB/s (13.6MB/s-16.5MB/s), io=56.8MiB (59.6MB), run=1001-1001msec 00:19:35.568 00:19:35.568 Disk stats (read/write): 00:19:35.568 nvme0n1: ios=3122/3104, merge=0/0, ticks=373/356, in_queue=729, util=84.47% 00:19:35.568 nvme0n2: ios=2850/3072, merge=0/0, ticks=342/372, in_queue=714, util=84.88% 00:19:35.568 nvme0n3: ios=2560/2846, merge=0/0, ticks=345/375, in_queue=720, util=88.22% 00:19:35.568 nvme0n4: ios=3072/3257, merge=0/0, ticks=339/374, in_queue=713, util=89.45% 00:19:35.569 02:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:35.569 [global] 00:19:35.569 thread=1 00:19:35.569 invalidate=1 00:19:35.569 rw=write 00:19:35.569 time_based=1 00:19:35.569 runtime=1 00:19:35.569 ioengine=libaio 00:19:35.569 direct=1 00:19:35.569 bs=4096 00:19:35.569 iodepth=128 00:19:35.569 norandommap=0 00:19:35.569 numjobs=1 00:19:35.569 00:19:35.569 verify_dump=1 00:19:35.569 verify_backlog=512 00:19:35.569 verify_state_save=0 00:19:35.569 do_verify=1 00:19:35.569 verify=crc32c-intel 00:19:35.569 [job0] 00:19:35.569 filename=/dev/nvme0n1 00:19:35.569 [job1] 00:19:35.569 filename=/dev/nvme0n2 00:19:35.569 [job2] 00:19:35.569 filename=/dev/nvme0n3 00:19:35.569 [job3] 00:19:35.569 filename=/dev/nvme0n4 00:19:35.569 Could not set queue depth (nvme0n1) 00:19:35.569 Could not set queue depth (nvme0n2) 00:19:35.569 Could not set queue depth (nvme0n3) 00:19:35.569 Could not set queue depth (nvme0n4) 00:19:35.827 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:35.827 job1: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:35.827 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:35.827 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:35.827 fio-3.35 00:19:35.827 Starting 4 threads 00:19:37.200 00:19:37.200 job0: (groupid=0, jobs=1): err= 0: pid=828520: Wed May 15 02:45:40 2024 00:19:37.200 read: IOPS=5908, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1005msec) 00:19:37.200 slat (usec): min=3, max=5875, avg=83.54, stdev=383.81 00:19:37.200 clat (usec): min=2659, max=20315, avg=10850.68, stdev=2925.71 00:19:37.200 lat (usec): min=3081, max=20321, avg=10934.23, stdev=2938.41 00:19:37.200 clat percentiles (usec): 00:19:37.200 | 1.00th=[ 4178], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8586], 00:19:37.200 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11207], 00:19:37.200 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14746], 95.00th=[15926], 00:19:37.200 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:19:37.200 | 99.99th=[20317] 00:19:37.200 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:19:37.200 slat (usec): min=3, max=6524, avg=76.92, stdev=357.78 00:19:37.200 clat (usec): min=2953, max=22751, avg=10217.33, stdev=3245.93 00:19:37.200 lat (usec): min=3266, max=22756, avg=10294.25, stdev=3261.61 00:19:37.200 clat percentiles (usec): 00:19:37.200 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 7898], 00:19:37.200 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10290], 00:19:37.200 | 70.00th=[11207], 80.00th=[12649], 90.00th=[14877], 95.00th=[16450], 00:19:37.200 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20841], 99.95th=[22152], 00:19:37.200 | 99.99th=[22676] 00:19:37.200 bw ( KiB/s): min=24576, max=24576, per=28.22%, avg=24576.00, stdev= 0.00, samples=2 00:19:37.200 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:19:37.200 lat (msec) : 4=0.42%, 10=47.84%, 20=50.49%, 50=1.25% 00:19:37.200 cpu : usr=4.38%, sys=5.98%, ctx=917, majf=0, minf=1 00:19:37.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:37.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:37.200 issued rwts: total=5938,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:37.200 job1: (groupid=0, jobs=1): err= 0: pid=828521: Wed May 15 02:45:40 2024 00:19:37.200 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:19:37.200 slat (usec): min=3, max=5052, avg=75.25, stdev=326.72 00:19:37.200 clat (usec): min=3792, max=20476, avg=9733.34, stdev=2295.52 00:19:37.200 lat (usec): min=3862, max=20481, avg=9808.59, stdev=2306.20 00:19:37.200 clat percentiles (usec): 00:19:37.200 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8225], 00:19:37.200 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:19:37.200 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[12518], 95.00th=[14615], 00:19:37.200 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20579], 99.95th=[20579], 00:19:37.200 | 99.99th=[20579] 00:19:37.200 write: IOPS=6914, BW=27.0MiB/s (28.3MB/s)(27.1MiB/1002msec); 0 zone resets 00:19:37.200 slat (usec): min=3, max=5477, avg=66.68, stdev=266.58 00:19:37.200 clat (usec): min=638, max=22560, 
avg=8948.23, stdev=2110.98 00:19:37.200 lat (usec): min=1521, max=22570, avg=9014.90, stdev=2117.18 00:19:37.200 clat percentiles (usec): 00:19:37.200 | 1.00th=[ 5342], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 7963], 00:19:37.200 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:19:37.200 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[11600], 00:19:37.200 | 99.00th=[19268], 99.50th=[19792], 99.90th=[22414], 99.95th=[22676], 00:19:37.200 | 99.99th=[22676] 00:19:37.200 bw ( KiB/s): min=26512, max=27896, per=31.23%, avg=27204.00, stdev=978.64, samples=2 00:19:37.200 iops : min= 6628, max= 6974, avg=6801.00, stdev=244.66, samples=2 00:19:37.200 lat (usec) : 750=0.01% 00:19:37.200 lat (msec) : 2=0.10%, 4=0.27%, 10=80.76%, 20=18.47%, 50=0.40% 00:19:37.200 cpu : usr=4.40%, sys=7.59%, ctx=999, majf=0, minf=1 00:19:37.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:37.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:37.200 issued rwts: total=6656,6928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:37.200 job2: (groupid=0, jobs=1): err= 0: pid=828522: Wed May 15 02:45:40 2024 00:19:37.200 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:19:37.200 slat (usec): min=3, max=7136, avg=101.38, stdev=450.71 00:19:37.200 clat (usec): min=6061, max=25736, avg=13334.00, stdev=3390.74 00:19:37.200 lat (usec): min=6068, max=25741, avg=13435.38, stdev=3411.85 00:19:37.200 clat percentiles (usec): 00:19:37.200 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10814], 00:19:37.200 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12649], 60.00th=[13435], 00:19:37.201 | 70.00th=[14746], 80.00th=[16319], 90.00th=[17957], 95.00th=[19530], 00:19:37.201 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23987], 99.95th=[23987], 00:19:37.201 | 99.99th=[25822] 00:19:37.201 write: IOPS=4692, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1005msec); 0 zone resets 00:19:37.201 slat (usec): min=3, max=6198, avg=107.71, stdev=469.30 00:19:37.201 clat (usec): min=3154, max=26017, avg=13887.45, stdev=3631.92 00:19:37.201 lat (usec): min=4393, max=26027, avg=13995.16, stdev=3649.80 00:19:37.201 clat percentiles (usec): 00:19:37.201 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10552], 00:19:37.201 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13698], 60.00th=[14615], 00:19:37.201 | 70.00th=[15795], 80.00th=[17433], 90.00th=[19268], 95.00th=[20055], 00:19:37.201 | 99.00th=[22676], 99.50th=[22676], 99.90th=[23200], 99.95th=[23987], 00:19:37.201 | 99.99th=[26084] 00:19:37.201 bw ( KiB/s): min=17080, max=19784, per=21.16%, avg=18432.00, stdev=1912.02, samples=2 00:19:37.201 iops : min= 4270, max= 4946, avg=4608.00, stdev=478.00, samples=2 00:19:37.201 lat (msec) : 4=0.01%, 10=14.62%, 20=80.09%, 50=5.28% 00:19:37.201 cpu : usr=3.39%, sys=5.08%, ctx=862, majf=0, minf=1 00:19:37.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:37.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:37.201 issued rwts: total=4608,4716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:37.201 job3: (groupid=0, jobs=1): err= 0: pid=828523: Wed May 15 02:45:40 2024 00:19:37.201 
read: IOPS=3679, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1005msec) 00:19:37.201 slat (usec): min=3, max=5516, avg=129.96, stdev=479.77 00:19:37.201 clat (usec): min=3251, max=26378, avg=16727.48, stdev=4042.88 00:19:37.201 lat (usec): min=4622, max=26394, avg=16857.44, stdev=4061.50 00:19:37.201 clat percentiles (usec): 00:19:37.201 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[12911], 00:19:37.201 | 30.00th=[15139], 40.00th=[16450], 50.00th=[17171], 60.00th=[17957], 00:19:37.201 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21890], 95.00th=[22414], 00:19:37.201 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25297], 99.95th=[25560], 00:19:37.201 | 99.99th=[26346] 00:19:37.201 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:19:37.201 slat (usec): min=4, max=5498, avg=121.52, stdev=464.74 00:19:37.201 clat (usec): min=7241, max=28198, avg=15911.18, stdev=4530.47 00:19:37.201 lat (usec): min=7246, max=28209, avg=16032.70, stdev=4565.61 00:19:37.201 clat percentiles (usec): 00:19:37.201 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[11207], 00:19:37.201 | 30.00th=[12911], 40.00th=[15008], 50.00th=[16712], 60.00th=[17695], 00:19:37.201 | 70.00th=[18744], 80.00th=[20055], 90.00th=[21627], 95.00th=[22938], 00:19:37.201 | 99.00th=[25822], 99.50th=[26608], 99.90th=[27132], 99.95th=[27395], 00:19:37.201 | 99.99th=[28181] 00:19:37.201 bw ( KiB/s): min=16280, max=16384, per=18.75%, avg=16332.00, stdev=73.54, samples=2 00:19:37.201 iops : min= 4070, max= 4096, avg=4083.00, stdev=18.38, samples=2 00:19:37.201 lat (msec) : 4=0.01%, 10=10.59%, 20=67.46%, 50=21.94% 00:19:37.201 cpu : usr=2.09%, sys=5.18%, ctx=811, majf=0, minf=1 00:19:37.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:37.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:37.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:37.201 issued rwts: total=3698,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:37.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:37.201 00:19:37.201 Run status group 0 (all jobs): 00:19:37.201 READ: bw=81.2MiB/s (85.2MB/s), 14.4MiB/s-25.9MiB/s (15.1MB/s-27.2MB/s), io=81.6MiB (85.6MB), run=1002-1005msec 00:19:37.201 WRITE: bw=85.1MiB/s (89.2MB/s), 15.9MiB/s-27.0MiB/s (16.7MB/s-28.3MB/s), io=85.5MiB (89.6MB), run=1002-1005msec 00:19:37.201 00:19:37.201 Disk stats (read/write): 00:19:37.201 nvme0n1: ios=4741/5120, merge=0/0, ticks=19219/19692, in_queue=38911, util=84.17% 00:19:37.201 nvme0n2: ios=5632/5826, merge=0/0, ticks=13546/12712, in_queue=26258, util=84.46% 00:19:37.201 nvme0n3: ios=3636/4096, merge=0/0, ticks=15526/17577, in_queue=33103, util=88.20% 00:19:37.201 nvme0n4: ios=3072/3370, merge=0/0, ticks=15733/16276, in_queue=32009, util=89.44% 00:19:37.201 02:45:40 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:37.201 [global] 00:19:37.201 thread=1 00:19:37.201 invalidate=1 00:19:37.201 rw=randwrite 00:19:37.201 time_based=1 00:19:37.201 runtime=1 00:19:37.201 ioengine=libaio 00:19:37.201 direct=1 00:19:37.201 bs=4096 00:19:37.201 iodepth=128 00:19:37.201 norandommap=0 00:19:37.201 numjobs=1 00:19:37.201 00:19:37.201 verify_dump=1 00:19:37.201 verify_backlog=512 00:19:37.201 verify_state_save=0 00:19:37.201 do_verify=1 00:19:37.201 verify=crc32c-intel 00:19:37.201 [job0] 00:19:37.201 filename=/dev/nvme0n1 00:19:37.201 [job1] 
00:19:37.201 filename=/dev/nvme0n2 00:19:37.201 [job2] 00:19:37.201 filename=/dev/nvme0n3 00:19:37.201 [job3] 00:19:37.201 filename=/dev/nvme0n4 00:19:37.201 Could not set queue depth (nvme0n1) 00:19:37.201 Could not set queue depth (nvme0n2) 00:19:37.201 Could not set queue depth (nvme0n3) 00:19:37.201 Could not set queue depth (nvme0n4) 00:19:37.201 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:37.201 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:37.201 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:37.201 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:37.201 fio-3.35 00:19:37.201 Starting 4 threads 00:19:38.577 00:19:38.577 job0: (groupid=0, jobs=1): err= 0: pid=828821: Wed May 15 02:45:41 2024 00:19:38.577 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:19:38.577 slat (nsec): min=1968, max=7864.9k, avg=120292.83, stdev=523789.27 00:19:38.577 clat (usec): min=3094, max=29810, avg=15362.06, stdev=5904.64 00:19:38.577 lat (usec): min=4788, max=29826, avg=15482.36, stdev=5939.18 00:19:38.577 clat percentiles (usec): 00:19:38.577 | 1.00th=[ 6783], 5.00th=[ 7439], 10.00th=[ 8455], 20.00th=[ 9372], 00:19:38.577 | 30.00th=[10552], 40.00th=[11600], 50.00th=[14746], 60.00th=[17171], 00:19:38.577 | 70.00th=[19792], 80.00th=[21890], 90.00th=[23462], 95.00th=[24511], 00:19:38.577 | 99.00th=[26346], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:19:38.577 | 99.99th=[29754] 00:19:38.577 write: IOPS=4243, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1004msec); 0 zone resets 00:19:38.577 slat (usec): min=2, max=6015, avg=112.69, stdev=472.50 00:19:38.577 clat (usec): min=3152, max=29715, avg=14987.56, stdev=5547.46 00:19:38.577 lat (usec): min=5182, max=29724, avg=15100.25, stdev=5569.07 00:19:38.577 clat percentiles (usec): 00:19:38.577 | 1.00th=[ 6652], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9372], 00:19:38.577 | 30.00th=[10159], 40.00th=[12125], 50.00th=[14353], 60.00th=[16581], 00:19:38.577 | 70.00th=[19006], 80.00th=[19792], 90.00th=[22676], 95.00th=[24773], 00:19:38.578 | 99.00th=[27132], 99.50th=[28181], 99.90th=[29754], 99.95th=[29754], 00:19:38.578 | 99.99th=[29754] 00:19:38.578 bw ( KiB/s): min=12344, max=20678, per=19.57%, avg=16511.00, stdev=5893.03, samples=2 00:19:38.578 iops : min= 3086, max= 5169, avg=4127.50, stdev=1472.90, samples=2 00:19:38.578 lat (msec) : 4=0.02%, 10=26.33%, 20=49.92%, 50=23.73% 00:19:38.578 cpu : usr=3.29%, sys=5.08%, ctx=822, majf=0, minf=1 00:19:38.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:38.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.578 issued rwts: total=4096,4260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.578 job1: (groupid=0, jobs=1): err= 0: pid=828822: Wed May 15 02:45:41 2024 00:19:38.578 read: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1001msec) 00:19:38.578 slat (usec): min=2, max=6881, avg=82.01, stdev=352.40 00:19:38.578 clat (usec): min=824, max=41895, avg=10685.59, stdev=5088.38 00:19:38.578 lat (usec): min=1896, max=41903, avg=10767.59, stdev=5120.73 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 4621], 
5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7635], 00:19:38.578 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:19:38.578 | 70.00th=[10552], 80.00th=[11863], 90.00th=[16319], 95.00th=[20841], 00:19:38.578 | 99.00th=[32637], 99.50th=[35390], 99.90th=[40109], 99.95th=[41681], 00:19:38.578 | 99.99th=[41681] 00:19:38.578 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:19:38.578 slat (usec): min=2, max=6366, avg=76.49, stdev=323.64 00:19:38.578 clat (usec): min=3898, max=30289, avg=10140.89, stdev=3506.71 00:19:38.578 lat (usec): min=3978, max=30295, avg=10217.38, stdev=3527.71 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 5342], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 8029], 00:19:38.578 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:19:38.578 | 70.00th=[10421], 80.00th=[11600], 90.00th=[14877], 95.00th=[16057], 00:19:38.578 | 99.00th=[25035], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:19:38.578 | 99.99th=[30278] 00:19:38.578 bw ( KiB/s): min=20480, max=20480, per=24.28%, avg=20480.00, stdev= 0.00, samples=1 00:19:38.578 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:38.578 lat (usec) : 1000=0.01% 00:19:38.578 lat (msec) : 2=0.07%, 4=0.26%, 10=62.50%, 20=33.29%, 50=3.87% 00:19:38.578 cpu : usr=3.70%, sys=8.20%, ctx=1042, majf=0, minf=1 00:19:38.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:38.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.578 issued rwts: total=5998,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.578 job2: (groupid=0, jobs=1): err= 0: pid=828829: Wed May 15 02:45:41 2024 00:19:38.578 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:19:38.578 slat (usec): min=3, max=6006, avg=94.49, stdev=460.76 00:19:38.578 clat (usec): min=4120, max=25259, avg=12249.76, stdev=3750.84 00:19:38.578 lat (usec): min=4728, max=25274, avg=12344.26, stdev=3762.00 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 5735], 5.00th=[ 7177], 10.00th=[ 8160], 20.00th=[ 8979], 00:19:38.578 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:19:38.578 | 70.00th=[13173], 80.00th=[14877], 90.00th=[18744], 95.00th=[19792], 00:19:38.578 | 99.00th=[22938], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:19:38.578 | 99.99th=[25297] 00:19:38.578 write: IOPS=5396, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1004msec); 0 zone resets 00:19:38.578 slat (usec): min=3, max=7029, avg=88.73, stdev=427.15 00:19:38.578 clat (usec): min=1504, max=24783, avg=11872.55, stdev=3913.82 00:19:38.578 lat (usec): min=1676, max=24789, avg=11961.28, stdev=3918.18 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 7504], 20.00th=[ 8717], 00:19:38.578 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:19:38.578 | 70.00th=[12780], 80.00th=[14877], 90.00th=[17695], 95.00th=[19792], 00:19:38.578 | 99.00th=[23725], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:19:38.578 | 99.99th=[24773] 00:19:38.578 bw ( KiB/s): min=17752, max=24576, per=25.09%, avg=21164.00, stdev=4825.30, samples=2 00:19:38.578 iops : min= 4438, max= 6144, avg=5291.00, stdev=1206.32, samples=2 00:19:38.578 lat (msec) : 2=0.08%, 4=0.10%, 10=27.97%, 20=68.26%, 50=3.60% 00:19:38.578 cpu : usr=3.09%, 
sys=8.28%, ctx=644, majf=0, minf=1 00:19:38.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:38.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.578 issued rwts: total=5120,5418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.578 job3: (groupid=0, jobs=1): err= 0: pid=828834: Wed May 15 02:45:41 2024 00:19:38.578 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:19:38.578 slat (usec): min=2, max=8304, avg=92.06, stdev=426.32 00:19:38.578 clat (usec): min=3349, max=26995, avg=12274.32, stdev=4257.18 00:19:38.578 lat (usec): min=3354, max=27004, avg=12366.38, stdev=4273.66 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 5211], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[ 8455], 00:19:38.578 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11863], 60.00th=[12911], 00:19:38.578 | 70.00th=[14484], 80.00th=[15795], 90.00th=[17695], 95.00th=[20317], 00:19:38.578 | 99.00th=[24773], 99.50th=[24773], 99.90th=[26870], 99.95th=[26870], 00:19:38.578 | 99.99th=[26870] 00:19:38.578 write: IOPS=5342, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1002msec); 0 zone resets 00:19:38.578 slat (usec): min=2, max=6234, avg=92.41, stdev=415.05 00:19:38.578 clat (usec): min=970, max=28600, avg=11937.94, stdev=4895.53 00:19:38.578 lat (usec): min=1838, max=29554, avg=12030.35, stdev=4917.29 00:19:38.578 clat percentiles (usec): 00:19:38.578 | 1.00th=[ 4293], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8094], 00:19:38.578 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10421], 60.00th=[12256], 00:19:38.578 | 70.00th=[13698], 80.00th=[15926], 90.00th=[19006], 95.00th=[21890], 00:19:38.578 | 99.00th=[26084], 99.50th=[27395], 99.90th=[28181], 99.95th=[28705], 00:19:38.578 | 99.99th=[28705] 00:19:38.578 bw ( KiB/s): min=20480, max=20480, per=24.28%, avg=20480.00, stdev= 0.00, samples=1 00:19:38.578 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:38.578 lat (usec) : 1000=0.01% 00:19:38.578 lat (msec) : 2=0.03%, 4=0.61%, 10=41.19%, 20=50.79%, 50=7.37% 00:19:38.578 cpu : usr=4.80%, sys=6.29%, ctx=922, majf=0, minf=1 00:19:38.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:38.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.578 issued rwts: total=5120,5353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.578 00:19:38.578 Run status group 0 (all jobs): 00:19:38.578 READ: bw=79.1MiB/s (83.0MB/s), 15.9MiB/s-23.4MiB/s (16.7MB/s-24.5MB/s), io=79.4MiB (83.3MB), run=1001-1004msec 00:19:38.578 WRITE: bw=82.4MiB/s (86.4MB/s), 16.6MiB/s-24.0MiB/s (17.4MB/s-25.1MB/s), io=82.7MiB (86.7MB), run=1001-1004msec 00:19:38.578 00:19:38.578 Disk stats (read/write): 00:19:38.578 nvme0n1: ios=3634/3626, merge=0/0, ticks=14707/13331, in_queue=28038, util=83.67% 00:19:38.578 nvme0n2: ios=4731/5120, merge=0/0, ticks=13921/13934, in_queue=27855, util=84.36% 00:19:38.578 nvme0n3: ios=4319/4608, merge=0/0, ticks=18815/20888, in_queue=39703, util=87.78% 00:19:38.578 nvme0n4: ios=4096/4356, merge=0/0, ticks=18307/17231, in_queue=35538, util=88.36% 00:19:38.578 02:45:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:38.578 02:45:41 nvmf_rdma.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:38.578 02:45:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=829008 00:19:38.578 02:45:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:38.578 [global] 00:19:38.578 thread=1 00:19:38.578 invalidate=1 00:19:38.578 rw=read 00:19:38.578 time_based=1 00:19:38.578 runtime=10 00:19:38.578 ioengine=libaio 00:19:38.578 direct=1 00:19:38.578 bs=4096 00:19:38.578 iodepth=1 00:19:38.578 norandommap=1 00:19:38.578 numjobs=1 00:19:38.578 00:19:38.578 [job0] 00:19:38.578 filename=/dev/nvme0n1 00:19:38.578 [job1] 00:19:38.578 filename=/dev/nvme0n2 00:19:38.578 [job2] 00:19:38.578 filename=/dev/nvme0n3 00:19:38.578 [job3] 00:19:38.578 filename=/dev/nvme0n4 00:19:38.578 Could not set queue depth (nvme0n1) 00:19:38.578 Could not set queue depth (nvme0n2) 00:19:38.578 Could not set queue depth (nvme0n3) 00:19:38.578 Could not set queue depth (nvme0n4) 00:19:38.837 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.837 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.837 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.837 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.837 fio-3.35 00:19:38.837 Starting 4 threads 00:19:42.120 02:45:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:42.120 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=66834432, buflen=4096 00:19:42.120 fio: pid=829225, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:42.120 02:45:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:42.120 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=90869760, buflen=4096 00:19:42.120 fio: pid=829218, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:42.120 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:42.120 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:42.378 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=23740416, buflen=4096 00:19:42.378 fio: pid=829179, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:42.378 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:42.378 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:42.637 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=37715968, buflen=4096 00:19:42.637 fio: pid=829196, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:42.637 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:42.637 02:45:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:19:42.637 00:19:42.637 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=829179: Wed May 15 02:45:45 2024 00:19:42.637 read: IOPS=6944, BW=27.1MiB/s (28.4MB/s)(86.6MiB/3194msec) 00:19:42.637 slat (usec): min=8, max=31535, avg=13.22, stdev=306.66 00:19:42.637 clat (usec): min=65, max=486, avg=129.32, stdev=31.55 00:19:42.637 lat (usec): min=74, max=31655, avg=142.54, stdev=308.07 00:19:42.637 clat percentiles (usec): 00:19:42.637 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:19:42.637 | 30.00th=[ 100], 40.00th=[ 108], 50.00th=[ 141], 60.00th=[ 147], 00:19:42.637 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:19:42.637 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 223], 99.95th=[ 231], 00:19:42.637 | 99.99th=[ 251] 00:19:42.637 bw ( KiB/s): min=22816, max=33496, per=27.47%, avg=27354.67, stdev=4226.51, samples=6 00:19:42.637 iops : min= 5704, max= 8374, avg=6838.67, stdev=1056.63, samples=6 00:19:42.637 lat (usec) : 100=29.82%, 250=70.16%, 500=0.01% 00:19:42.637 cpu : usr=2.13%, sys=8.05%, ctx=22185, majf=0, minf=1 00:19:42.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.637 issued rwts: total=22181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.637 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=829196: Wed May 15 02:45:45 2024 00:19:42.638 read: IOPS=7384, BW=28.8MiB/s (30.2MB/s)(100.0MiB/3466msec) 00:19:42.638 slat (usec): min=3, max=16410, avg=11.23, stdev=193.27 00:19:42.638 clat (usec): min=44, max=489, avg=122.89, stdev=43.69 00:19:42.638 lat (usec): min=49, max=16482, avg=134.11, stdev=197.85 00:19:42.638 clat percentiles (usec): 00:19:42.638 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 74], 00:19:42.638 | 30.00th=[ 87], 40.00th=[ 129], 50.00th=[ 143], 60.00th=[ 149], 00:19:42.638 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 178], 00:19:42.638 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 231], 00:19:42.638 | 99.99th=[ 355] 00:19:42.638 bw ( KiB/s): min=22424, max=34072, per=26.33%, avg=26212.00, stdev=4013.92, samples=6 00:19:42.638 iops : min= 5606, max= 8518, avg=6553.00, stdev=1003.48, samples=6 00:19:42.638 lat (usec) : 50=1.53%, 100=35.01%, 250=63.44%, 500=0.01% 00:19:42.638 cpu : usr=2.28%, sys=7.36%, ctx=25599, majf=0, minf=1 00:19:42.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 issued rwts: total=25593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.638 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=829218: Wed May 15 02:45:45 2024 00:19:42.638 read: IOPS=7497, BW=29.3MiB/s (30.7MB/s)(86.7MiB/2959msec) 00:19:42.638 slat (usec): min=8, max=11875, avg=10.27, stdev=98.97 00:19:42.638 clat (usec): min=86, max=405, avg=120.84, stdev=13.74 00:19:42.638 lat (usec): min=95, max=12016, avg=131.11, stdev=100.04 00:19:42.638 clat percentiles (usec): 00:19:42.638 | 1.00th=[ 102], 5.00th=[ 106], 
10.00th=[ 109], 20.00th=[ 113], 00:19:42.638 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 123], 00:19:42.638 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:19:42.638 | 99.00th=[ 165], 99.50th=[ 210], 99.90th=[ 262], 99.95th=[ 281], 00:19:42.638 | 99.99th=[ 375] 00:19:42.638 bw ( KiB/s): min=29184, max=32104, per=30.67%, avg=30539.20, stdev=1124.30, samples=5 00:19:42.638 iops : min= 7296, max= 8026, avg=7634.80, stdev=281.08, samples=5 00:19:42.638 lat (usec) : 100=0.37%, 250=99.50%, 500=0.13% 00:19:42.638 cpu : usr=2.54%, sys=8.42%, ctx=22188, majf=0, minf=1 00:19:42.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 issued rwts: total=22186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.638 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=829225: Wed May 15 02:45:45 2024 00:19:42.638 read: IOPS=5979, BW=23.4MiB/s (24.5MB/s)(63.7MiB/2729msec) 00:19:42.638 slat (nsec): min=8594, max=38547, avg=9498.48, stdev=1142.69 00:19:42.638 clat (usec): min=80, max=449, avg=155.07, stdev=18.59 00:19:42.638 lat (usec): min=89, max=458, avg=164.57, stdev=18.62 00:19:42.638 clat percentiles (usec): 00:19:42.638 | 1.00th=[ 99], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 143], 00:19:42.638 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:19:42.638 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 190], 00:19:42.638 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 235], 99.95th=[ 241], 00:19:42.638 | 99.99th=[ 412] 00:19:42.638 bw ( KiB/s): min=22328, max=24896, per=24.21%, avg=24105.60, stdev=1027.64, samples=5 00:19:42.638 iops : min= 5582, max= 6224, avg=6026.40, stdev=256.91, samples=5 00:19:42.638 lat (usec) : 100=1.13%, 250=98.84%, 500=0.02% 00:19:42.638 cpu : usr=2.13%, sys=6.74%, ctx=16318, majf=0, minf=2 00:19:42.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:42.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:42.638 issued rwts: total=16318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:42.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:42.638 00:19:42.638 Run status group 0 (all jobs): 00:19:42.638 READ: bw=97.2MiB/s (102MB/s), 23.4MiB/s-29.3MiB/s (24.5MB/s-30.7MB/s), io=337MiB (353MB), run=2729-3466msec 00:19:42.638 00:19:42.638 Disk stats (read/write): 00:19:42.638 nvme0n1: ios=20943/0, merge=0/0, ticks=2649/0, in_queue=2649, util=91.68% 00:19:42.638 nvme0n2: ios=23080/0, merge=0/0, ticks=2852/0, in_queue=2852, util=93.34% 00:19:42.638 nvme0n3: ios=21248/0, merge=0/0, ticks=2485/0, in_queue=2485, util=95.55% 00:19:42.638 nvme0n4: ios=15407/0, merge=0/0, ticks=2314/0, in_queue=2314, util=96.42% 00:19:42.897 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:42.897 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:43.155 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
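The trace above is the hotplug pass of the fio target test: target/fio.sh@58 starts a 10-second, time-based read job (bs=4096, iodepth=1, per the [global] section shown earlier), fio.sh@59 records its pid, @61 sleeps 3 seconds, and @63 to @66 then delete the backing concat, raid and malloc bdevs over RPC while the job is still running, so every job is expected to finish with err=121 (Remote I/O error), as the per-job lines show. A minimal hand-run equivalent could look like the sketch below; it assumes a single namespace visible as /dev/nvme0n1 backed by a malloc bdev named Malloc0 (both names taken from this log, not general defaults), and uses plain fio flags instead of the fio-wrapper script.

  # start a time-based read job in the background, mirroring the job file above
  fio --name=hotplug --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=read --bs=4096 --iodepth=1 --time_based --runtime=10 &
  fio_pid=$!
  sleep 3    # let the job start issuing I/O, as fio.sh@61 does
  # pull the backing bdev out from under the running job
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
  wait $fio_pid || echo 'fio failed as expected (Remote I/O error)'

The earlier write and randwrite passes at target/fio.sh@52 and @53 use the same wrapper at iodepth 128 with crc32c verification, only without the mid-run bdev removal, which is why they all complete with err=0.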
00:19:43.155 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:43.413 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:43.413 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:43.671 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:43.671 02:45:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:43.928 02:45:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:43.928 02:45:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 829008 00:19:43.928 02:45:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:43.928 02:45:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:44.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:44.862 nvmf hotplug test: fio failed as expected 00:19:44.862 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-rdma 00:19:45.120 rmmod nvme_rdma 00:19:45.120 rmmod nvme_fabrics 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 826655 ']' 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 826655 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 826655 ']' 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 826655 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:45.120 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 826655 00:19:45.379 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:45.379 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:45.379 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 826655' 00:19:45.379 killing process with pid 826655 00:19:45.379 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 826655 00:19:45.379 [2024-05-15 02:45:48.424864] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:45.379 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 826655 00:19:45.379 [2024-05-15 02:45:48.532963] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:45.637 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.638 02:45:48 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:45.638 00:19:45.638 real 0m27.637s 00:19:45.638 user 1m44.891s 00:19:45.638 sys 0m10.341s 00:19:45.638 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:45.638 02:45:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.638 ************************************ 00:19:45.638 END TEST nvmf_fio_target 00:19:45.638 ************************************ 00:19:45.638 02:45:48 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:45.638 02:45:48 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:45.638 02:45:48 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:45.638 02:45:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:45.638 ************************************ 00:19:45.638 START TEST nvmf_bdevio 00:19:45.638 ************************************ 00:19:45.638 02:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:19:45.896 * Looking for test storage... 
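The teardown traced just above runs in a fixed order: disconnect the host-side controller, wait until no block device with the test serial remains, unload the kernel fabrics modules, then kill the nvmf_tgt process; the RPC cleanup (nvmf_delete_subsystem at fio.sh@83) happens before the target is killed. Condensed into a stand-alone sketch, with the NQN, serial and pid variable taken from this run rather than being general defaults:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # wait until no block device with the test serial is left
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  modprobe -v -r nvme-rdma       # also drops nvme_fabrics, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # pid of the nvmf_tgt started earlier (826655 in this run)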
00:19:45.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.896 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:45.897 02:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:52.465 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:52.465 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:52.465 Found net devices under 0000:18:00.0: mlx_0_0 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:52.465 Found net devices under 0000:18:00.1: mlx_0_1 00:19:52.465 
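What the discovery trace above is doing: nvmf/common.sh keeps a table of known RDMA-capable vendor:device IDs (Intel e810/x722 and Mellanox mlx5 parts), matches it against the PCI bus, and resolves each hit to its netdev through sysfs, which is how 0000:18:00.0 and 0000:18:00.1 (0x15b3:0x1015) become mlx_0_0 and mlx_0_1 here. A rough stand-alone equivalent for the Mellanox case is sketched below; it uses lspci instead of the script's cached PCI tables, so the exact filtering differs from common.sh.

  # list ConnectX functions (vendor 0x15b3) and the netdev behind each one
  for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
      netdev=$(ls /sys/bus/pci/devices/"$pci"/net/ 2>/dev/null)
      echo "Found net devices under $pci: $netdev"
  done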
02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:52.465 02:45:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:52.465 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:52.465 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:52.465 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:52.466 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.466 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:19:52.466 altname enp24s0f0np0 00:19:52.466 altname ens785f0np0 00:19:52.466 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.466 valid_lft forever preferred_lft forever 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:52.466 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.466 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:19:52.466 altname enp24s0f1np1 00:19:52.466 altname ens785f1np1 00:19:52.466 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.466 valid_lft forever preferred_lft forever 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- 
# [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.466 192.168.100.9' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:52.466 192.168.100.9' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:52.466 192.168.100.9' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.466 02:45:55 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=832949 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 832949 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 832949 ']' 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.466 [2024-05-15 02:45:55.275999] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:52.466 [2024-05-15 02:45:55.276072] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.466 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.466 [2024-05-15 02:45:55.386989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.466 [2024-05-15 02:45:55.434283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.466 [2024-05-15 02:45:55.434333] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.466 [2024-05-15 02:45:55.434347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.466 [2024-05-15 02:45:55.434360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.466 [2024-05-15 02:45:55.434371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
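The nvmf_tgt above is launched with the core mask -m 0x78; 0x78 is binary 1111000, so the mask selects cores 3-6, which matches the four reactor notices that follow. A tiny illustrative helper for expanding such a mask (a sketch only, not part of nvmf/common.sh or the test scripts):

  # decode_core_mask: hypothetical helper that lists the CPU cores selected by an SPDK core mask
  decode_core_mask() {
    local mask=$(( $1 )) core=0
    while (( mask )); do
      if (( mask & 1 )); then echo "core $core"; fi
      mask=$(( mask >> 1 ))
      core=$(( core + 1 ))
    done
  }
  decode_core_mask 0x78   # -> core 3, core 4, core 5, core 6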
00:19:52.466 [2024-05-15 02:45:55.434493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.466 [2024-05-15 02:45:55.434993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:52.466 [2024-05-15 02:45:55.435080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:52.466 [2024-05-15 02:45:55.435081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:19:52.466 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.467 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.467 [2024-05-15 02:45:55.622094] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x148d650/0x1491b40) succeed. 00:19:52.467 [2024-05-15 02:45:55.637092] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x148ec90/0x14d31d0) succeed. 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.725 Malloc0 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.725 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:52.726 [2024-05-15 02:45:55.834625] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor 
of trtype to be removed in v24.09 00:19:52.726 [2024-05-15 02:45:55.835002] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:52.726 { 00:19:52.726 "params": { 00:19:52.726 "name": "Nvme$subsystem", 00:19:52.726 "trtype": "$TEST_TRANSPORT", 00:19:52.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.726 "adrfam": "ipv4", 00:19:52.726 "trsvcid": "$NVMF_PORT", 00:19:52.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.726 "hdgst": ${hdgst:-false}, 00:19:52.726 "ddgst": ${ddgst:-false} 00:19:52.726 }, 00:19:52.726 "method": "bdev_nvme_attach_controller" 00:19:52.726 } 00:19:52.726 EOF 00:19:52.726 )") 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:52.726 02:45:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:52.726 "params": { 00:19:52.726 "name": "Nvme1", 00:19:52.726 "trtype": "rdma", 00:19:52.726 "traddr": "192.168.100.8", 00:19:52.726 "adrfam": "ipv4", 00:19:52.726 "trsvcid": "4420", 00:19:52.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.726 "hdgst": false, 00:19:52.726 "ddgst": false 00:19:52.726 }, 00:19:52.726 "method": "bdev_nvme_attach_controller" 00:19:52.726 }' 00:19:52.726 [2024-05-15 02:45:55.885859] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
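For reference, the target-side objects this bdevio run connects to were created by the rpc_cmd calls traced just above; the same bring-up, reduced to plain scripts/rpc.py invocations with the addresses and NQNs from this log (an illustrative recap, not an extra step the test performs):

  # Target bring-up as traced above (rpc.py talks to the default /var/tmp/spdk.sock)
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420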
00:19:52.726 [2024-05-15 02:45:55.885935] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832977 ] 00:19:52.726 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.726 [2024-05-15 02:45:55.994748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.984 [2024-05-15 02:45:56.045206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.984 [2024-05-15 02:45:56.045292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.984 [2024-05-15 02:45:56.045297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.984 I/O targets: 00:19:52.984 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:52.984 00:19:52.984 00:19:52.984 CUnit - A unit testing framework for C - Version 2.1-3 00:19:52.984 http://cunit.sourceforge.net/ 00:19:52.984 00:19:52.984 00:19:52.984 Suite: bdevio tests on: Nvme1n1 00:19:52.984 Test: blockdev write read block ...passed 00:19:52.984 Test: blockdev write zeroes read block ...passed 00:19:52.984 Test: blockdev write zeroes read no split ...passed 00:19:52.984 Test: blockdev write zeroes read split ...passed 00:19:52.984 Test: blockdev write zeroes read split partial ...passed 00:19:52.984 Test: blockdev reset ...[2024-05-15 02:45:56.254073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.243 [2024-05-15 02:45:56.283409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:53.243 [2024-05-15 02:45:56.313769] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:53.243 passed 00:19:53.243 Test: blockdev write read 8 blocks ...passed 00:19:53.243 Test: blockdev write read size > 128k ...passed 00:19:53.243 Test: blockdev write read invalid size ...passed 00:19:53.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:53.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:53.243 Test: blockdev write read max offset ...passed 00:19:53.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:53.243 Test: blockdev writev readv 8 blocks ...passed 00:19:53.243 Test: blockdev writev readv 30 x 1block ...passed 00:19:53.243 Test: blockdev writev readv block ...passed 00:19:53.243 Test: blockdev writev readv size > 128k ...passed 00:19:53.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:53.243 Test: blockdev comparev and writev ...[2024-05-15 02:45:56.317483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.243 [2024-05-15 02:45:56.317513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.243 [2024-05-15 02:45:56.317526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.243 [2024-05-15 02:45:56.317538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.243 [2024-05-15 02:45:56.317758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.243 [2024-05-15 02:45:56.317770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.243 [2024-05-15 02:45:56.317781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.243 [2024-05-15 02:45:56.317790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.243 [2024-05-15 02:45:56.318018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.243 [2024-05-15 02:45:56.318033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.244 [2024-05-15 02:45:56.318054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.244 [2024-05-15 02:45:56.318274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:53.244 [2024-05-15 02:45:56.318294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.244 passed 00:19:53.244 Test: blockdev nvme passthru rw ...passed 00:19:53.244 Test: blockdev nvme passthru vendor specific ...[2024-05-15 02:45:56.318637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:53.244 [2024-05-15 02:45:56.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:53.244 [2024-05-15 02:45:56.318712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:53.244 [2024-05-15 02:45:56.318772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.244 [2024-05-15 02:45:56.318833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:19:53.244 [2024-05-15 02:45:56.318843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.244 passed 00:19:53.244 Test: blockdev nvme admin passthru ...passed 00:19:53.244 Test: blockdev copy ...passed 00:19:53.244 00:19:53.244 Run Summary: Type Total Ran Passed Failed Inactive 00:19:53.244 suites 1 1 n/a 0 0 00:19:53.244 tests 23 23 23 0 0 00:19:53.244 asserts 152 152 152 0 n/a 00:19:53.244 00:19:53.244 Elapsed time = 0.203 seconds 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:53.502 rmmod nvme_rdma 00:19:53.502 rmmod nvme_fabrics 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 832949 ']' 00:19:53.502 02:45:56 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 832949 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 832949 ']' 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 832949 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 832949 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 832949' 00:19:53.502 killing process with pid 832949 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 832949 00:19:53.502 [2024-05-15 02:45:56.639969] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:53.502 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 832949 00:19:53.502 [2024-05-15 02:45:56.752146] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:53.762 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.762 02:45:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:53.762 00:19:53.762 real 0m8.150s 00:19:53.762 user 0m8.818s 00:19:53.762 sys 0m5.490s 00:19:53.762 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:53.762 02:45:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:53.762 ************************************ 00:19:53.762 END TEST nvmf_bdevio 00:19:53.762 ************************************ 00:19:53.762 02:45:57 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:53.762 02:45:57 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:53.762 02:45:57 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:53.762 02:45:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:54.022 ************************************ 00:19:54.022 START TEST nvmf_auth_target 00:19:54.022 ************************************ 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:19:54.022 * Looking for test storage... 
00:19:54.022 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:54.022 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.023 02:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:00.689 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:00.689 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:00.689 Found net devices under 0000:18:00.0: mlx_0_0 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:00.689 Found net devices under 0000:18:00.1: mlx_0_1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:00.689 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.689 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:20:00.689 altname enp24s0f0np0 00:20:00.689 altname ens785f0np0 00:20:00.689 inet 192.168.100.8/24 scope global mlx_0_0 00:20:00.689 valid_lft forever preferred_lft forever 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:00.689 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.689 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:20:00.689 altname enp24s0f1np1 00:20:00.689 altname ens785f1np1 00:20:00.689 inet 192.168.100.9/24 scope global mlx_0_1 00:20:00.689 valid_lft forever preferred_lft forever 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 
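Every address lookup in this block goes through the same get_ip_address pipeline traced at nvmf/common.sh@112-113; condensed into a standalone form (mirroring the trace, nothing added beyond the function wrapper):

  # IPv4 address of an interface with the /prefix stripped - the pipeline traced above
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8
  get_ip_address mlx_0_1   # -> 192.168.100.9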
00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.689 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:20:00.690 192.168.100.9' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:00.690 192.168.100.9' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:00.690 192.168.100.9' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=836053 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 836053 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 836053 ']' 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:00.690 02:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=836235 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=019541e17bad8bab41466a7b5c1a45e505f8d2e022d58d16 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uFg 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 019541e17bad8bab41466a7b5c1a45e505f8d2e022d58d16 0 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 019541e17bad8bab41466a7b5c1a45e505f8d2e022d58d16 0 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=019541e17bad8bab41466a7b5c1a45e505f8d2e022d58d16 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uFg 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uFg 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.uFg 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9aab56a0927652ca267fcd3a1eb0e6c 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Sbp 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9aab56a0927652ca267fcd3a1eb0e6c 1 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b9aab56a0927652ca267fcd3a1eb0e6c 1 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9aab56a0927652ca267fcd3a1eb0e6c 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Sbp 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Sbp 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.Sbp 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=807850b8eefff69972d6316cc49538d6fbfdef3f739be27f 00:20:01.629 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.odW 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 807850b8eefff69972d6316cc49538d6fbfdef3f739be27f 2 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 807850b8eefff69972d6316cc49538d6fbfdef3f739be27f 2 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=807850b8eefff69972d6316cc49538d6fbfdef3f739be27f 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.odW 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.odW 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.odW 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9abffa2e914c1abfcad231a46d5202d1a5b5f97a60b8a047b59945e3ea031e79 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vTU 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9abffa2e914c1abfcad231a46d5202d1a5b5f97a60b8a047b59945e3ea031e79 3 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9abffa2e914c1abfcad231a46d5202d1a5b5f97a60b8a047b59945e3ea031e79 3 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9abffa2e914c1abfcad231a46d5202d1a5b5f97a60b8a047b59945e3ea031e79 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:01.889 02:46:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vTU 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vTU 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.vTU 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 836053 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 836053 ']' 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
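Each of the four DHCHAP keys above is generated the same way: gen_dhchap_key draws len/2 random bytes as hex via xxd, the python helper wraps them into a DHHC-1 secret, and the result lands in a mode-0600 mktemp file. A condensed sketch of the random-material and file-handling part only (the DHHC-1 wrapping performed by the python one-liner in nvmf/common.sh is deliberately not reproduced here):

  # Key material as traced above: len hex characters into /tmp/spdk.key-<digest>.XXX
  len=48                                           # 48 for null/sha384, 32 for sha256, 64 for sha512
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes printed as len hex chars
  file=$(mktemp -t spdk.key-null.XXX)
  # ... format_dhchap_key turns $key into a DHHC-1 string which is written to $file ...
  chmod 0600 "$file"
  echo "$file"                                     # e.g. /tmp/spdk.key-null.uFg in this run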
00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:01.889 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 836235 /var/tmp/host.sock 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 836235 ']' 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:02.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:02.148 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uFg 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uFg 00:20:02.408 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uFg 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbp 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbp 00:20:02.668 02:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Sbp 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.odW 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.odW 00:20:02.927 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.odW 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vTU 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vTU 00:20:03.186 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vTU 00:20:03.446 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:03.446 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.446 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:03.446 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.446 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.705 02:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.965 00:20:03.965 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:03.965 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:03.965 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:04.224 { 00:20:04.224 "cntlid": 1, 00:20:04.224 "qid": 0, 00:20:04.224 "state": "enabled", 00:20:04.224 "listen_address": { 00:20:04.224 "trtype": "RDMA", 00:20:04.224 "adrfam": "IPv4", 00:20:04.224 "traddr": "192.168.100.8", 00:20:04.224 "trsvcid": "4420" 00:20:04.224 }, 00:20:04.224 "peer_address": { 00:20:04.224 "trtype": "RDMA", 00:20:04.224 "adrfam": "IPv4", 00:20:04.224 "traddr": "192.168.100.8", 00:20:04.224 "trsvcid": "59929" 00:20:04.224 }, 00:20:04.224 "auth": { 00:20:04.224 "state": "completed", 00:20:04.224 "digest": "sha256", 00:20:04.224 "dhgroup": "null" 00:20:04.224 } 00:20:04.224 } 00:20:04.224 ]' 00:20:04.224 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.483 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.743 02:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:10.016 02:46:12 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.017 02:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.017 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.276 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.534 02:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.534 02:46:13 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:10.792 { 00:20:10.792 "cntlid": 3, 00:20:10.792 "qid": 0, 00:20:10.792 "state": "enabled", 00:20:10.792 "listen_address": { 00:20:10.792 "trtype": "RDMA", 00:20:10.792 "adrfam": "IPv4", 00:20:10.792 "traddr": "192.168.100.8", 00:20:10.792 "trsvcid": "4420" 00:20:10.792 }, 00:20:10.792 "peer_address": { 00:20:10.792 "trtype": "RDMA", 00:20:10.792 "adrfam": "IPv4", 00:20:10.792 "traddr": "192.168.100.8", 00:20:10.792 "trsvcid": "58989" 00:20:10.792 }, 00:20:10.792 "auth": { 00:20:10.792 "state": "completed", 00:20:10.792 "digest": "sha256", 00:20:10.792 "dhgroup": "null" 00:20:10.792 } 00:20:10.792 } 00:20:10.792 ]' 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.792 02:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.051 02:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:11.985 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 
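Each connect_authenticate pass traced here follows the same two-step pattern: authorize the host NQN on the subsystem with the selected key on the target (default /var/tmp/spdk.sock), then attach a controller from the separate host-side SPDK app on /var/tmp/host.sock using the matching key. A condensed sketch of one pass, with the NQNs, address and rpc.py path copied from the trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e

  # target side: allow this host to authenticate with key1
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
  # host side: attach a controller over RDMA, authenticating with the same key
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1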
00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:12.244 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:12.503 00:20:12.503 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:12.503 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:12.503 02:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:12.762 { 00:20:12.762 "cntlid": 5, 00:20:12.762 "qid": 0, 00:20:12.762 "state": "enabled", 00:20:12.762 "listen_address": { 00:20:12.762 "trtype": "RDMA", 00:20:12.762 "adrfam": "IPv4", 00:20:12.762 "traddr": "192.168.100.8", 00:20:12.762 "trsvcid": "4420" 00:20:12.762 }, 00:20:12.762 "peer_address": { 00:20:12.762 "trtype": "RDMA", 00:20:12.762 "adrfam": "IPv4", 00:20:12.762 "traddr": "192.168.100.8", 00:20:12.762 "trsvcid": "42671" 00:20:12.762 }, 00:20:12.762 "auth": { 00:20:12.762 "state": "completed", 00:20:12.762 "digest": "sha256", 00:20:12.762 "dhgroup": "null" 00:20:12.762 } 00:20:12.762 } 00:20:12.762 ]' 00:20:12.762 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.021 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.280 02:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.216 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.475 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.734 00:20:14.734 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.734 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.734 02:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.993 { 00:20:14.993 "cntlid": 7, 00:20:14.993 "qid": 0, 00:20:14.993 "state": "enabled", 00:20:14.993 "listen_address": { 00:20:14.993 "trtype": "RDMA", 00:20:14.993 "adrfam": "IPv4", 00:20:14.993 "traddr": "192.168.100.8", 00:20:14.993 "trsvcid": "4420" 00:20:14.993 }, 00:20:14.993 "peer_address": { 00:20:14.993 "trtype": "RDMA", 00:20:14.993 "adrfam": "IPv4", 00:20:14.993 "traddr": "192.168.100.8", 00:20:14.993 "trsvcid": "57813" 00:20:14.993 }, 00:20:14.993 "auth": { 00:20:14.993 "state": "completed", 00:20:14.993 "digest": "sha256", 00:20:14.993 "dhgroup": "null" 00:20:14.993 } 00:20:14.993 } 00:20:14.993 ]' 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.993 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.252 02:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:16.190 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.449 02:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.450 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.450 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:16.709 00:20:16.709 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:16.709 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:16.709 02:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.968 { 00:20:16.968 "cntlid": 9, 00:20:16.968 "qid": 0, 00:20:16.968 "state": 
"enabled", 00:20:16.968 "listen_address": { 00:20:16.968 "trtype": "RDMA", 00:20:16.968 "adrfam": "IPv4", 00:20:16.968 "traddr": "192.168.100.8", 00:20:16.968 "trsvcid": "4420" 00:20:16.968 }, 00:20:16.968 "peer_address": { 00:20:16.968 "trtype": "RDMA", 00:20:16.968 "adrfam": "IPv4", 00:20:16.968 "traddr": "192.168.100.8", 00:20:16.968 "trsvcid": "36974" 00:20:16.968 }, 00:20:16.968 "auth": { 00:20:16.968 "state": "completed", 00:20:16.968 "digest": "sha256", 00:20:16.968 "dhgroup": "ffdhe2048" 00:20:16.968 } 00:20:16.968 } 00:20:16.968 ]' 00:20:16.968 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.228 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.487 02:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:18.424 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.424 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.425 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.685 02:46:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:18.685 02:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:18.944 00:20:18.944 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:18.944 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:18.944 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:19.203 { 00:20:19.203 "cntlid": 11, 00:20:19.203 "qid": 0, 00:20:19.203 "state": "enabled", 00:20:19.203 "listen_address": { 00:20:19.203 "trtype": "RDMA", 00:20:19.203 "adrfam": "IPv4", 00:20:19.203 "traddr": "192.168.100.8", 00:20:19.203 "trsvcid": "4420" 00:20:19.203 }, 00:20:19.203 "peer_address": { 00:20:19.203 "trtype": "RDMA", 00:20:19.203 "adrfam": "IPv4", 00:20:19.203 "traddr": "192.168.100.8", 00:20:19.203 "trsvcid": "55033" 00:20:19.203 }, 00:20:19.203 "auth": { 00:20:19.203 "state": "completed", 00:20:19.203 "digest": "sha256", 00:20:19.203 "dhgroup": "ffdhe2048" 00:20:19.203 } 00:20:19.203 } 00:20:19.203 ]' 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.203 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.203 02:46:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.462 02:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.400 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.659 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:20.660 02:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:20.919 00:20:20.919 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:20.919 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
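After each attach, the script checks both ends before tearing the pass down: the host app must report the nvme0 controller, and the target-side qpair must show the negotiated digest, dhgroup and a completed auth state. A sketch of those assertions using the same rpc.py and jq filters as the trace (expected values shown for a sha256/ffdhe2048 pass):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side: the attached controller must be visible
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # target side: the reported qpair must have completed DH-HMAC-CHAP with the chosen parameters
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]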
00:20:20.919 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:21.179 { 00:20:21.179 "cntlid": 13, 00:20:21.179 "qid": 0, 00:20:21.179 "state": "enabled", 00:20:21.179 "listen_address": { 00:20:21.179 "trtype": "RDMA", 00:20:21.179 "adrfam": "IPv4", 00:20:21.179 "traddr": "192.168.100.8", 00:20:21.179 "trsvcid": "4420" 00:20:21.179 }, 00:20:21.179 "peer_address": { 00:20:21.179 "trtype": "RDMA", 00:20:21.179 "adrfam": "IPv4", 00:20:21.179 "traddr": "192.168.100.8", 00:20:21.179 "trsvcid": "39753" 00:20:21.179 }, 00:20:21.179 "auth": { 00:20:21.179 "state": "completed", 00:20:21.179 "digest": "sha256", 00:20:21.179 "dhgroup": "ffdhe2048" 00:20:21.179 } 00:20:21.179 } 00:20:21.179 ]' 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.179 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:21.438 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.438 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.438 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.817 02:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:22.387 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:22.646 
02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.646 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.905 02:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.163 00:20:23.163 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:23.163 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:23.163 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:23.422 { 00:20:23.422 "cntlid": 15, 00:20:23.422 "qid": 0, 00:20:23.422 "state": "enabled", 00:20:23.422 "listen_address": { 00:20:23.422 "trtype": "RDMA", 00:20:23.422 "adrfam": "IPv4", 00:20:23.422 "traddr": "192.168.100.8", 00:20:23.422 "trsvcid": "4420" 00:20:23.422 }, 00:20:23.422 "peer_address": { 00:20:23.422 "trtype": "RDMA", 00:20:23.422 "adrfam": "IPv4", 00:20:23.422 "traddr": "192.168.100.8", 00:20:23.422 "trsvcid": "48784" 00:20:23.422 }, 00:20:23.422 "auth": { 00:20:23.422 "state": "completed", 
00:20:23.422 "digest": "sha256", 00:20:23.422 "dhgroup": "ffdhe2048" 00:20:23.422 } 00:20:23.422 } 00:20:23.422 ]' 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.422 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.682 02:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:20:24.619 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.620 02:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
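Each pass also exercises the kernel initiator: nvme-cli connects to the same subsystem with the formatted DHHC-1 secret, disconnects, and the host is then removed from the subsystem before the next key is tried. A sketch of that leg, with the hostid and the key0 secret copied verbatim from the trace:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e
  secret='DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==:'

  # kernel host: connect with the DH-HMAC-CHAP secret, then tear down again
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret "$secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0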
00:20:24.879 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.138 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.138 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:25.139 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:25.397 00:20:25.397 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:25.397 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.397 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:25.656 { 00:20:25.656 "cntlid": 17, 00:20:25.656 "qid": 0, 00:20:25.656 "state": "enabled", 00:20:25.656 "listen_address": { 00:20:25.656 "trtype": "RDMA", 00:20:25.656 "adrfam": "IPv4", 00:20:25.656 "traddr": "192.168.100.8", 00:20:25.656 "trsvcid": "4420" 00:20:25.656 }, 00:20:25.656 "peer_address": { 00:20:25.656 "trtype": "RDMA", 00:20:25.656 "adrfam": "IPv4", 00:20:25.656 "traddr": "192.168.100.8", 00:20:25.656 "trsvcid": "32916" 00:20:25.656 }, 00:20:25.656 "auth": { 00:20:25.656 "state": "completed", 00:20:25.656 "digest": "sha256", 00:20:25.656 "dhgroup": "ffdhe3072" 00:20:25.656 } 00:20:25.656 } 00:20:25.656 ]' 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.656 02:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.914 02:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:26.852 02:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.852 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:27.111 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:27.370 00:20:27.370 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.370 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.371 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.630 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.630 { 00:20:27.630 "cntlid": 19, 00:20:27.630 "qid": 0, 00:20:27.630 "state": "enabled", 00:20:27.630 "listen_address": { 00:20:27.630 "trtype": "RDMA", 00:20:27.630 "adrfam": "IPv4", 00:20:27.631 "traddr": "192.168.100.8", 00:20:27.631 "trsvcid": "4420" 00:20:27.631 }, 00:20:27.631 "peer_address": { 00:20:27.631 "trtype": "RDMA", 00:20:27.631 "adrfam": "IPv4", 00:20:27.631 "traddr": "192.168.100.8", 00:20:27.631 "trsvcid": "48904" 00:20:27.631 }, 00:20:27.631 "auth": { 00:20:27.631 "state": "completed", 00:20:27.631 "digest": "sha256", 00:20:27.631 "dhgroup": "ffdhe3072" 00:20:27.631 } 00:20:27.631 } 00:20:27.631 ]' 00:20:27.631 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.890 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.890 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.890 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.890 02:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.890 02:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.890 02:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.890 02:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.150 02:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.087 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.347 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.606 00:20:29.606 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.606 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.606 02:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.865 { 00:20:29.865 "cntlid": 21, 00:20:29.865 "qid": 0, 00:20:29.865 "state": "enabled", 00:20:29.865 "listen_address": { 00:20:29.865 "trtype": "RDMA", 00:20:29.865 "adrfam": "IPv4", 00:20:29.865 "traddr": "192.168.100.8", 00:20:29.865 "trsvcid": "4420" 00:20:29.865 }, 00:20:29.865 "peer_address": { 00:20:29.865 "trtype": "RDMA", 00:20:29.865 "adrfam": "IPv4", 00:20:29.865 "traddr": "192.168.100.8", 00:20:29.865 "trsvcid": "56774" 00:20:29.865 }, 00:20:29.865 "auth": { 00:20:29.865 "state": "completed", 00:20:29.865 "digest": "sha256", 00:20:29.865 "dhgroup": "ffdhe3072" 00:20:29.865 } 00:20:29.865 } 00:20:29.865 ]' 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.865 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:30.125 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.125 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.125 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.125 02:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.069 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.328 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.587 00:20:31.847 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.847 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.847 02:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.106 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.106 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.106 02:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.107 { 00:20:32.107 "cntlid": 23, 00:20:32.107 "qid": 0, 00:20:32.107 "state": "enabled", 00:20:32.107 "listen_address": { 00:20:32.107 "trtype": "RDMA", 00:20:32.107 "adrfam": "IPv4", 00:20:32.107 "traddr": "192.168.100.8", 00:20:32.107 "trsvcid": "4420" 00:20:32.107 }, 00:20:32.107 "peer_address": { 00:20:32.107 "trtype": "RDMA", 00:20:32.107 "adrfam": "IPv4", 00:20:32.107 "traddr": "192.168.100.8", 00:20:32.107 "trsvcid": "41555" 00:20:32.107 }, 00:20:32.107 "auth": { 00:20:32.107 "state": "completed", 00:20:32.107 "digest": "sha256", 00:20:32.107 "dhgroup": "ffdhe3072" 00:20:32.107 } 00:20:32.107 } 00:20:32.107 ]' 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.107 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.366 02:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:33.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.304 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:33.564 02:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:33.824 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.084 02:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.343 { 00:20:34.343 "cntlid": 25, 00:20:34.343 "qid": 0, 00:20:34.343 "state": "enabled", 00:20:34.343 "listen_address": { 00:20:34.343 "trtype": "RDMA", 00:20:34.343 "adrfam": "IPv4", 00:20:34.343 "traddr": "192.168.100.8", 00:20:34.343 "trsvcid": "4420" 00:20:34.343 }, 00:20:34.343 "peer_address": { 00:20:34.343 "trtype": "RDMA", 00:20:34.343 "adrfam": "IPv4", 00:20:34.343 "traddr": "192.168.100.8", 00:20:34.343 "trsvcid": "38399" 00:20:34.343 }, 00:20:34.343 "auth": { 00:20:34.343 "state": "completed", 00:20:34.343 "digest": "sha256", 00:20:34.343 "dhgroup": "ffdhe4096" 00:20:34.343 } 00:20:34.343 } 00:20:34.343 ]' 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.343 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.344 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.344 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.344 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.344 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.603 02:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.542 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate 
sha256 ffdhe4096 1 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:35.802 02:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:36.061 00:20:36.061 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:36.061 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:36.061 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:36.320 { 00:20:36.320 "cntlid": 27, 00:20:36.320 "qid": 0, 00:20:36.320 "state": "enabled", 00:20:36.320 "listen_address": { 00:20:36.320 "trtype": "RDMA", 00:20:36.320 "adrfam": "IPv4", 00:20:36.320 "traddr": "192.168.100.8", 00:20:36.320 "trsvcid": "4420" 00:20:36.320 }, 00:20:36.320 "peer_address": { 00:20:36.320 "trtype": "RDMA", 00:20:36.320 "adrfam": "IPv4", 00:20:36.320 "traddr": "192.168.100.8", 00:20:36.320 "trsvcid": "45264" 00:20:36.320 }, 00:20:36.320 "auth": { 00:20:36.320 "state": "completed", 00:20:36.320 "digest": "sha256", 00:20:36.320 "dhgroup": "ffdhe4096" 00:20:36.320 } 00:20:36.320 } 00:20:36.320 ]' 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.320 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:36.580 02:46:39 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.580 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:36.580 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.580 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.580 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.839 02:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:37.775 02:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:38.035 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:38.294 00:20:38.294 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.294 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.294 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:38.552 { 00:20:38.552 "cntlid": 29, 00:20:38.552 "qid": 0, 00:20:38.552 "state": "enabled", 00:20:38.552 "listen_address": { 00:20:38.552 "trtype": "RDMA", 00:20:38.552 "adrfam": "IPv4", 00:20:38.552 "traddr": "192.168.100.8", 00:20:38.552 "trsvcid": "4420" 00:20:38.552 }, 00:20:38.552 "peer_address": { 00:20:38.552 "trtype": "RDMA", 00:20:38.552 "adrfam": "IPv4", 00:20:38.552 "traddr": "192.168.100.8", 00:20:38.552 "trsvcid": "51853" 00:20:38.552 }, 00:20:38.552 "auth": { 00:20:38.552 "state": "completed", 00:20:38.552 "digest": "sha256", 00:20:38.552 "dhgroup": "ffdhe4096" 00:20:38.552 } 00:20:38.552 } 00:20:38.552 ]' 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.552 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:38.810 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.810 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.810 02:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.134 02:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:39.702 02:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 
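The stretch of trace above repeats the same authentication round once per key and DH group. Below is a condensed bash sketch of a single round, reconstructed only from the commands visible in the trace; it assumes the target RPC server is on rpc.py's default socket, the host-side bdev_nvme RPC server is on /var/tmp/host.sock, the RDMA listener is the 192.168.100.8:4420 address shown above, and DHCHAP_SECRET stands in for the DHHC-1 key material that appears inline in the log.

  # One authentication round, as exercised by target/auth.sh in the trace above.
  # Assumptions: target RPC on rpc.py's default socket, host RPC on /var/tmp/host.sock,
  # RDMA listener 192.168.100.8:4420; DHCHAP_SECRET is a placeholder for the DHHC-1 secret.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Restrict the host to one digest/DH-group pair, then allow the host with the key under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

  # Attach via the SPDK host stack, confirm the controller, inspect the negotiated auth block, detach.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the attach with the kernel initiator, then drop the host entry again.
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Only the key id (key0..key3) and the DH group change between rounds; every [[ ... ]] assertion in the trace corresponds to one of the steps above.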
00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:39.961 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.220 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.478 00:20:40.478 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:40.479 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:40.479 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:40.737 { 00:20:40.737 "cntlid": 31, 00:20:40.737 "qid": 0, 00:20:40.737 "state": "enabled", 00:20:40.737 
"listen_address": { 00:20:40.737 "trtype": "RDMA", 00:20:40.737 "adrfam": "IPv4", 00:20:40.737 "traddr": "192.168.100.8", 00:20:40.737 "trsvcid": "4420" 00:20:40.737 }, 00:20:40.737 "peer_address": { 00:20:40.737 "trtype": "RDMA", 00:20:40.737 "adrfam": "IPv4", 00:20:40.737 "traddr": "192.168.100.8", 00:20:40.737 "trsvcid": "37407" 00:20:40.737 }, 00:20:40.737 "auth": { 00:20:40.737 "state": "completed", 00:20:40.737 "digest": "sha256", 00:20:40.737 "dhgroup": "ffdhe4096" 00:20:40.737 } 00:20:40.737 } 00:20:40.737 ]' 00:20:40.737 02:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:40.737 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.737 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:40.995 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.995 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:40.995 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.995 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.995 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.306 02:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:20:41.906 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.165 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:42.424 02:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:42.990 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.990 02:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.248 02:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.248 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:43.248 { 00:20:43.248 "cntlid": 33, 00:20:43.248 "qid": 0, 00:20:43.248 "state": "enabled", 00:20:43.248 "listen_address": { 00:20:43.248 "trtype": "RDMA", 00:20:43.248 "adrfam": "IPv4", 00:20:43.248 "traddr": "192.168.100.8", 00:20:43.248 "trsvcid": "4420" 00:20:43.248 }, 00:20:43.248 "peer_address": { 00:20:43.249 "trtype": "RDMA", 00:20:43.249 "adrfam": "IPv4", 00:20:43.249 "traddr": "192.168.100.8", 00:20:43.249 "trsvcid": "55448" 00:20:43.249 }, 00:20:43.249 "auth": { 00:20:43.249 "state": "completed", 00:20:43.249 "digest": "sha256", 00:20:43.249 "dhgroup": "ffdhe6144" 00:20:43.249 } 00:20:43.249 } 00:20:43.249 ]' 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.249 02:46:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.249 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.507 02:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.443 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.702 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:20:44.702 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:44.702 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.702 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:44.703 02:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:44.962 00:20:45.221 02:46:48 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:45.221 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:45.221 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.481 { 00:20:45.481 "cntlid": 35, 00:20:45.481 "qid": 0, 00:20:45.481 "state": "enabled", 00:20:45.481 "listen_address": { 00:20:45.481 "trtype": "RDMA", 00:20:45.481 "adrfam": "IPv4", 00:20:45.481 "traddr": "192.168.100.8", 00:20:45.481 "trsvcid": "4420" 00:20:45.481 }, 00:20:45.481 "peer_address": { 00:20:45.481 "trtype": "RDMA", 00:20:45.481 "adrfam": "IPv4", 00:20:45.481 "traddr": "192.168.100.8", 00:20:45.481 "trsvcid": "55860" 00:20:45.481 }, 00:20:45.481 "auth": { 00:20:45.481 "state": "completed", 00:20:45.481 "digest": "sha256", 00:20:45.481 "dhgroup": "ffdhe6144" 00:20:45.481 } 00:20:45.481 } 00:20:45.481 ]' 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.481 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.740 02:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.676 02:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:46.936 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:47.503 00:20:47.503 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.503 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.503 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:47.762 { 00:20:47.762 "cntlid": 37, 00:20:47.762 "qid": 0, 00:20:47.762 "state": "enabled", 00:20:47.762 "listen_address": { 00:20:47.762 "trtype": "RDMA", 00:20:47.762 "adrfam": "IPv4", 00:20:47.762 "traddr": "192.168.100.8", 00:20:47.762 "trsvcid": "4420" 00:20:47.762 }, 00:20:47.762 "peer_address": { 00:20:47.762 "trtype": "RDMA", 00:20:47.762 "adrfam": "IPv4", 00:20:47.762 "traddr": 
"192.168.100.8", 00:20:47.762 "trsvcid": "36718" 00:20:47.762 }, 00:20:47.762 "auth": { 00:20:47.762 "state": "completed", 00:20:47.762 "digest": "sha256", 00:20:47.762 "dhgroup": "ffdhe6144" 00:20:47.762 } 00:20:47.762 } 00:20:47.762 ]' 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:47.762 02:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.762 02:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.762 02:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.762 02:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.030 02:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.967 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:49.225 02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.226 
02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.226 02:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.226 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.226 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.793 00:20:49.793 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.793 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.793 02:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:50.053 { 00:20:50.053 "cntlid": 39, 00:20:50.053 "qid": 0, 00:20:50.053 "state": "enabled", 00:20:50.053 "listen_address": { 00:20:50.053 "trtype": "RDMA", 00:20:50.053 "adrfam": "IPv4", 00:20:50.053 "traddr": "192.168.100.8", 00:20:50.053 "trsvcid": "4420" 00:20:50.053 }, 00:20:50.053 "peer_address": { 00:20:50.053 "trtype": "RDMA", 00:20:50.053 "adrfam": "IPv4", 00:20:50.053 "traddr": "192.168.100.8", 00:20:50.053 "trsvcid": "55753" 00:20:50.053 }, 00:20:50.053 "auth": { 00:20:50.053 "state": "completed", 00:20:50.053 "digest": "sha256", 00:20:50.053 "dhgroup": "ffdhe6144" 00:20:50.053 } 00:20:50.053 } 00:20:50.053 ]' 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.053 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.317 02:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:20:51.251 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.252 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:51.252 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.252 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.510 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:51.511 02:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:52.078 00:20:52.078 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:52.078 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.078 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.337 { 00:20:52.337 "cntlid": 41, 00:20:52.337 "qid": 0, 00:20:52.337 "state": "enabled", 00:20:52.337 "listen_address": { 00:20:52.337 "trtype": "RDMA", 00:20:52.337 "adrfam": "IPv4", 00:20:52.337 "traddr": "192.168.100.8", 00:20:52.337 "trsvcid": "4420" 00:20:52.337 }, 00:20:52.337 "peer_address": { 00:20:52.337 "trtype": "RDMA", 00:20:52.337 "adrfam": "IPv4", 00:20:52.337 "traddr": "192.168.100.8", 00:20:52.337 "trsvcid": "47440" 00:20:52.337 }, 00:20:52.337 "auth": { 00:20:52.337 "state": "completed", 00:20:52.337 "digest": "sha256", 00:20:52.337 "dhgroup": "ffdhe8192" 00:20:52.337 } 00:20:52.337 } 00:20:52.337 ]' 00:20:52.337 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.595 02:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.854 02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.792 02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.792 
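Every `hostrpc <method>` line in this trace (expanded at target/auth.sh@31) becomes a `scripts/rpc.py -s /var/tmp/host.sock` call against the host-side SPDK instance that acts as the DH-HMAC-CHAP initiator, while the `rpc_cmd nvmf_*` lines go to the NVMe-oF target over its own RPC socket. A minimal sketch of equivalent wrappers, assuming the host socket path visible in the expansions above (the target-side default socket is an assumption, not shown in the trace):

# Hypothetical reconstruction of the two RPC helpers used by target/auth.sh.
# HOST_SOCK is taken from the logged expansions; the target socket is assumed.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
HOST_SOCK=/var/tmp/host.sock

hostrpc() {
    # Host-side bdev_nvme instance (the authenticating initiator).
    "$SPDK_ROOT/scripts/rpc.py" -s "$HOST_SOCK" "$@"
}

rpc_cmd() {
    # NVMe-oF target (subsystem and allowed-host management).
    "$SPDK_ROOT/scripts/rpc.py" "$@"
}
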
02:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:54.052 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:54.620 00:20:54.620 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:54.620 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:54.620 02:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.879 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.879 { 00:20:54.879 "cntlid": 43, 00:20:54.879 "qid": 0, 00:20:54.879 "state": "enabled", 00:20:54.879 "listen_address": { 00:20:54.879 "trtype": "RDMA", 00:20:54.879 "adrfam": "IPv4", 00:20:54.879 "traddr": "192.168.100.8", 00:20:54.879 "trsvcid": "4420" 00:20:54.879 }, 00:20:54.879 "peer_address": { 00:20:54.879 "trtype": "RDMA", 00:20:54.879 "adrfam": "IPv4", 00:20:54.879 "traddr": "192.168.100.8", 00:20:54.879 "trsvcid": "49671" 00:20:54.879 }, 00:20:54.879 "auth": { 00:20:54.879 "state": "completed", 00:20:54.879 "digest": "sha256", 00:20:54.879 "dhgroup": "ffdhe8192" 00:20:54.879 } 00:20:54.879 } 00:20:54.879 ]' 00:20:54.879 02:46:58 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.140 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.399 02:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.337 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:56.598 02:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:57.167 00:20:57.167 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.167 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.167 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.426 { 00:20:57.426 "cntlid": 45, 00:20:57.426 "qid": 0, 00:20:57.426 "state": "enabled", 00:20:57.426 "listen_address": { 00:20:57.426 "trtype": "RDMA", 00:20:57.426 "adrfam": "IPv4", 00:20:57.426 "traddr": "192.168.100.8", 00:20:57.426 "trsvcid": "4420" 00:20:57.426 }, 00:20:57.426 "peer_address": { 00:20:57.426 "trtype": "RDMA", 00:20:57.426 "adrfam": "IPv4", 00:20:57.426 "traddr": "192.168.100.8", 00:20:57.426 "trsvcid": "39440" 00:20:57.426 }, 00:20:57.426 "auth": { 00:20:57.426 "state": "completed", 00:20:57.426 "digest": "sha256", 00:20:57.426 "dhgroup": "ffdhe8192" 00:20:57.426 } 00:20:57.426 } 00:20:57.426 ]' 00:20:57.426 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.686 02:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.945 02:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:20:58.882 02:47:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.882 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.141 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.711 00:20:59.711 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.711 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.711 02:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.970 
02:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.970 { 00:20:59.970 "cntlid": 47, 00:20:59.970 "qid": 0, 00:20:59.970 "state": "enabled", 00:20:59.970 "listen_address": { 00:20:59.970 "trtype": "RDMA", 00:20:59.970 "adrfam": "IPv4", 00:20:59.970 "traddr": "192.168.100.8", 00:20:59.970 "trsvcid": "4420" 00:20:59.970 }, 00:20:59.970 "peer_address": { 00:20:59.970 "trtype": "RDMA", 00:20:59.970 "adrfam": "IPv4", 00:20:59.970 "traddr": "192.168.100.8", 00:20:59.970 "trsvcid": "51199" 00:20:59.970 }, 00:20:59.970 "auth": { 00:20:59.970 "state": "completed", 00:20:59.970 "digest": "sha256", 00:20:59.970 "dhgroup": "ffdhe8192" 00:20:59.970 } 00:20:59.970 } 00:20:59.970 ]' 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.970 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.229 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.229 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.229 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.229 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.229 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.486 02:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:01.120 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.379 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:01.379 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.379 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.380 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:01.637 02:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:01.895 00:21:01.895 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:01.895 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.895 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.153 { 00:21:02.153 "cntlid": 49, 00:21:02.153 "qid": 0, 00:21:02.153 "state": "enabled", 00:21:02.153 "listen_address": { 00:21:02.153 "trtype": "RDMA", 00:21:02.153 "adrfam": "IPv4", 00:21:02.153 "traddr": "192.168.100.8", 00:21:02.153 "trsvcid": "4420" 00:21:02.153 }, 00:21:02.153 "peer_address": { 00:21:02.153 "trtype": "RDMA", 00:21:02.153 "adrfam": "IPv4", 00:21:02.153 "traddr": "192.168.100.8", 00:21:02.153 "trsvcid": "60900" 00:21:02.153 }, 00:21:02.153 "auth": { 00:21:02.153 "state": "completed", 00:21:02.153 "digest": "sha384", 00:21:02.153 "dhgroup": "null" 00:21:02.153 } 00:21:02.153 } 00:21:02.153 ]' 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.153 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
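Each digest/dhgroup/key combination exercised in this trace runs the same cycle: restrict the host-side initiator to the parameters under test, register the host NQN on the subsystem with that key, attach a controller (which performs the DH-HMAC-CHAP handshake), verify the authenticated qpair, detach, redo the handshake with the kernel initiator via `nvme connect`, and finally remove the host. A condensed sketch of one iteration, using the sha384/null/key0 values and addresses shown above (wrappers as sketched earlier; the DHHC-1 secret is elided):

# Condensed sketch of one connect_authenticate iteration from this trace.
digest=sha384 dhgroup=null keyid=0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e

# Limit the initiator to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the subsystem with the key being exercised.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

# Attach a controller; this is where the DH-HMAC-CHAP handshake happens.
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
    -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"

# Confirm the qpair authenticated with the expected parameters, then detach.
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"
hostrpc bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator (secret elided here).
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret 'DHHC-1:...'
nvme disconnect -n "$subnqn"

# Deregister the host before the next combination.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
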
00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.154 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.412 02:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.348 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.607 02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:03.607 
02:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:03.865 00:21:03.865 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:03.865 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:03.865 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.123 { 00:21:04.123 "cntlid": 51, 00:21:04.123 "qid": 0, 00:21:04.123 "state": "enabled", 00:21:04.123 "listen_address": { 00:21:04.123 "trtype": "RDMA", 00:21:04.123 "adrfam": "IPv4", 00:21:04.123 "traddr": "192.168.100.8", 00:21:04.123 "trsvcid": "4420" 00:21:04.123 }, 00:21:04.123 "peer_address": { 00:21:04.123 "trtype": "RDMA", 00:21:04.123 "adrfam": "IPv4", 00:21:04.123 "traddr": "192.168.100.8", 00:21:04.123 "trsvcid": "48763" 00:21:04.123 }, 00:21:04.123 "auth": { 00:21:04.123 "state": "completed", 00:21:04.123 "digest": "sha384", 00:21:04.123 "dhgroup": "null" 00:21:04.123 } 00:21:04.123 } 00:21:04.123 ]' 00:21:04.123 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.381 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.640 02:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:05.205 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.463 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.721 02:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:05.978 00:21:05.978 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:05.978 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.978 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 
00:21:06.236 { 00:21:06.236 "cntlid": 53, 00:21:06.236 "qid": 0, 00:21:06.236 "state": "enabled", 00:21:06.236 "listen_address": { 00:21:06.236 "trtype": "RDMA", 00:21:06.236 "adrfam": "IPv4", 00:21:06.236 "traddr": "192.168.100.8", 00:21:06.236 "trsvcid": "4420" 00:21:06.236 }, 00:21:06.236 "peer_address": { 00:21:06.236 "trtype": "RDMA", 00:21:06.236 "adrfam": "IPv4", 00:21:06.236 "traddr": "192.168.100.8", 00:21:06.236 "trsvcid": "50310" 00:21:06.236 }, 00:21:06.236 "auth": { 00:21:06.236 "state": "completed", 00:21:06.236 "digest": "sha384", 00:21:06.236 "dhgroup": "null" 00:21:06.236 } 00:21:06.236 } 00:21:06.236 ]' 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.236 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.494 02:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.431 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.690 02:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.949 00:21:07.949 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.949 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.949 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:08.208 { 00:21:08.208 "cntlid": 55, 00:21:08.208 "qid": 0, 00:21:08.208 "state": "enabled", 00:21:08.208 "listen_address": { 00:21:08.208 "trtype": "RDMA", 00:21:08.208 "adrfam": "IPv4", 00:21:08.208 "traddr": "192.168.100.8", 00:21:08.208 "trsvcid": "4420" 00:21:08.208 }, 00:21:08.208 "peer_address": { 00:21:08.208 "trtype": "RDMA", 00:21:08.208 "adrfam": "IPv4", 00:21:08.208 "traddr": "192.168.100.8", 00:21:08.208 "trsvcid": "51616" 00:21:08.208 }, 00:21:08.208 "auth": { 00:21:08.208 "state": "completed", 00:21:08.208 "digest": "sha384", 00:21:08.208 "dhgroup": "null" 00:21:08.208 } 00:21:08.208 } 00:21:08.208 ]' 00:21:08.208 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:21:08.467 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.725 02:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.663 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:09.922 02:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:10.180 00:21:10.180 02:47:13 
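The check that follows each attach (and resumes just below) reduces to a handful of jq assertions over the two RPC outputs: `bdev_nvme_get_controllers` on the host must report the `nvme0` controller, and the first qpair returned by `nvmf_subsystem_get_qpairs` on the target must carry the negotiated digest and dhgroup with `auth.state` equal to `completed`. A sketch of that verification, assuming the wrappers above and the sha384/ffdhe2048 combination being tested here:

# Post-attach verification, as performed after every handshake in this trace.
digest=sha384 dhgroup=ffdhe2048
subnqn=nqn.2024-03.io.spdk:cnode0

# The host-side instance must expose exactly the expected controller.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target must report an authenticated qpair with the negotiated parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
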
nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:10.180 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:10.180 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.438 { 00:21:10.438 "cntlid": 57, 00:21:10.438 "qid": 0, 00:21:10.438 "state": "enabled", 00:21:10.438 "listen_address": { 00:21:10.438 "trtype": "RDMA", 00:21:10.438 "adrfam": "IPv4", 00:21:10.438 "traddr": "192.168.100.8", 00:21:10.438 "trsvcid": "4420" 00:21:10.438 }, 00:21:10.438 "peer_address": { 00:21:10.438 "trtype": "RDMA", 00:21:10.438 "adrfam": "IPv4", 00:21:10.438 "traddr": "192.168.100.8", 00:21:10.438 "trsvcid": "40448" 00:21:10.438 }, 00:21:10.438 "auth": { 00:21:10.438 "state": "completed", 00:21:10.438 "digest": "sha384", 00:21:10.438 "dhgroup": "ffdhe2048" 00:21:10.438 } 00:21:10.438 } 00:21:10.438 ]' 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.438 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.697 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.697 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.697 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.956 02:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:11.523 02:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.782 02:47:14 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.782 02:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.782 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.783 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:11.783 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:12.351 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.351 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.610 { 00:21:12.610 "cntlid": 59, 00:21:12.610 "qid": 0, 00:21:12.610 "state": "enabled", 00:21:12.610 "listen_address": { 00:21:12.610 "trtype": "RDMA", 00:21:12.610 "adrfam": "IPv4", 00:21:12.610 "traddr": "192.168.100.8", 00:21:12.610 "trsvcid": "4420" 00:21:12.610 }, 00:21:12.610 "peer_address": { 00:21:12.610 
"trtype": "RDMA", 00:21:12.610 "adrfam": "IPv4", 00:21:12.610 "traddr": "192.168.100.8", 00:21:12.610 "trsvcid": "39894" 00:21:12.610 }, 00:21:12.610 "auth": { 00:21:12.610 "state": "completed", 00:21:12.610 "digest": "sha384", 00:21:12.610 "dhgroup": "ffdhe2048" 00:21:12.610 } 00:21:12.610 } 00:21:12.610 ]' 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.610 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.869 02:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:13.436 02:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.695 02:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.954 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:14.212 00:21:14.212 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:14.212 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.212 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:14.472 { 00:21:14.472 "cntlid": 61, 00:21:14.472 "qid": 0, 00:21:14.472 "state": "enabled", 00:21:14.472 "listen_address": { 00:21:14.472 "trtype": "RDMA", 00:21:14.472 "adrfam": "IPv4", 00:21:14.472 "traddr": "192.168.100.8", 00:21:14.472 "trsvcid": "4420" 00:21:14.472 }, 00:21:14.472 "peer_address": { 00:21:14.472 "trtype": "RDMA", 00:21:14.472 "adrfam": "IPv4", 00:21:14.472 "traddr": "192.168.100.8", 00:21:14.472 "trsvcid": "37019" 00:21:14.472 }, 00:21:14.472 "auth": { 00:21:14.472 "state": "completed", 00:21:14.472 "digest": "sha384", 00:21:14.472 "dhgroup": "ffdhe2048" 00:21:14.472 } 00:21:14.472 } 00:21:14.472 ]' 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.472 02:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.040 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:15.607 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:15.866 02:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.124 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.383 00:21:16.383 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:16.383 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:16.384 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.642 02:47:19 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:16.642 { 00:21:16.642 "cntlid": 63, 00:21:16.642 "qid": 0, 00:21:16.642 "state": "enabled", 00:21:16.642 "listen_address": { 00:21:16.642 "trtype": "RDMA", 00:21:16.642 "adrfam": "IPv4", 00:21:16.642 "traddr": "192.168.100.8", 00:21:16.642 "trsvcid": "4420" 00:21:16.642 }, 00:21:16.642 "peer_address": { 00:21:16.642 "trtype": "RDMA", 00:21:16.642 "adrfam": "IPv4", 00:21:16.642 "traddr": "192.168.100.8", 00:21:16.642 "trsvcid": "36823" 00:21:16.642 }, 00:21:16.642 "auth": { 00:21:16.642 "state": "completed", 00:21:16.642 "digest": "sha384", 00:21:16.642 "dhgroup": "ffdhe2048" 00:21:16.642 } 00:21:16.642 } 00:21:16.642 ]' 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.642 02:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.900 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:17.836 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.837 02:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.095 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.353 00:21:18.353 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:18.353 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:18.353 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:18.612 { 00:21:18.612 "cntlid": 65, 00:21:18.612 "qid": 0, 00:21:18.612 "state": "enabled", 00:21:18.612 "listen_address": { 00:21:18.612 "trtype": "RDMA", 00:21:18.612 "adrfam": "IPv4", 00:21:18.612 "traddr": "192.168.100.8", 00:21:18.612 "trsvcid": "4420" 00:21:18.612 }, 00:21:18.612 "peer_address": { 00:21:18.612 "trtype": "RDMA", 00:21:18.612 "adrfam": "IPv4", 00:21:18.612 "traddr": "192.168.100.8", 00:21:18.612 "trsvcid": "43210" 00:21:18.612 }, 00:21:18.612 "auth": { 00:21:18.612 "state": "completed", 00:21:18.612 "digest": "sha384", 00:21:18.612 "dhgroup": "ffdhe3072" 
00:21:18.612 } 00:21:18.612 } 00:21:18.612 ]' 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.612 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:18.871 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.871 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.871 02:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.130 02:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:19.761 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.019 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.278 
02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.278 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.536 00:21:20.536 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:20.536 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:20.536 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:20.795 { 00:21:20.795 "cntlid": 67, 00:21:20.795 "qid": 0, 00:21:20.795 "state": "enabled", 00:21:20.795 "listen_address": { 00:21:20.795 "trtype": "RDMA", 00:21:20.795 "adrfam": "IPv4", 00:21:20.795 "traddr": "192.168.100.8", 00:21:20.795 "trsvcid": "4420" 00:21:20.795 }, 00:21:20.795 "peer_address": { 00:21:20.795 "trtype": "RDMA", 00:21:20.795 "adrfam": "IPv4", 00:21:20.795 "traddr": "192.168.100.8", 00:21:20.795 "trsvcid": "43893" 00:21:20.795 }, 00:21:20.795 "auth": { 00:21:20.795 "state": "completed", 00:21:20.795 "digest": "sha384", 00:21:20.795 "dhgroup": "ffdhe3072" 00:21:20.795 } 00:21:20.795 } 00:21:20.795 ]' 00:21:20.795 02:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:20.795 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.795 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:20.795 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.795 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:21.054 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.054 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.054 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.054 02:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 
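The iteration traced above reduces to the short sketch below. Every RPC and nvme-cli invocation is copied from the trace (the host-side RPC socket is /var/tmp/host.sock); collapsing the hostrpc/rpc_cmd wrappers into direct rpc.py calls and pointing the target-side calls at the target application's default RPC socket are assumptions, since that socket is not shown in this excerpt.

# One connect_authenticate pass, condensed from the trace above (sha384 / ffdhe3072 / key1).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e
subnqn=nqn.2024-03.io.spdk:cnode0
digest=sha384 dhgroup=ffdhe3072 key=key1

# Host side: restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: authorize the host on the subsystem with the key under test
# (assumed to go to the target's default RPC socket).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# Host side: attach a controller over RDMA, authenticating with the same key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# Target side: confirm the qpair finished authentication with the expected
# parameters (the jq checks seen after every attach in the trace).
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth'
# expect: "state": "completed", "digest": "sha384", "dhgroup": "ffdhe3072"

# Detach the bdev controller, repeat the handshake through the kernel host
# with the matching DHHC-1 secret, then clean up the host entry.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e \
    --dhchap-secret 'DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o:'
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The remaining passes in the trace only vary the --dhchap-dhgroups value (ffdhe2048 through ffdhe6144) and which key0..key3 / DHHC-1 secret pair is exercised.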
00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:21.990 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.249 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.250 02:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.250 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.250 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.818 00:21:22.818 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:22.818 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:22.818 02:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.077 { 00:21:23.077 "cntlid": 69, 00:21:23.077 "qid": 0, 00:21:23.077 "state": "enabled", 00:21:23.077 "listen_address": { 00:21:23.077 "trtype": "RDMA", 00:21:23.077 "adrfam": "IPv4", 00:21:23.077 "traddr": "192.168.100.8", 00:21:23.077 "trsvcid": "4420" 00:21:23.077 }, 00:21:23.077 "peer_address": { 00:21:23.077 "trtype": "RDMA", 00:21:23.077 "adrfam": "IPv4", 00:21:23.077 "traddr": "192.168.100.8", 00:21:23.077 "trsvcid": "42903" 00:21:23.077 }, 00:21:23.077 "auth": { 00:21:23.077 "state": "completed", 00:21:23.077 "digest": "sha384", 00:21:23.077 "dhgroup": "ffdhe3072" 00:21:23.077 } 00:21:23.077 } 00:21:23.077 ]' 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.077 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.335 02:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:24.271 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:21:24.530 02:47:27 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.530 02:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.789 00:21:24.789 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:24.789 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:24.789 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.048 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.048 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.048 02:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.048 02:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.049 02:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.049 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.049 { 00:21:25.049 "cntlid": 71, 00:21:25.049 "qid": 0, 00:21:25.049 "state": "enabled", 00:21:25.049 "listen_address": { 00:21:25.049 "trtype": "RDMA", 00:21:25.049 "adrfam": "IPv4", 00:21:25.049 "traddr": "192.168.100.8", 00:21:25.049 "trsvcid": "4420" 00:21:25.049 }, 00:21:25.049 "peer_address": { 00:21:25.049 "trtype": "RDMA", 00:21:25.049 "adrfam": "IPv4", 00:21:25.049 "traddr": "192.168.100.8", 00:21:25.049 "trsvcid": "45184" 00:21:25.049 }, 00:21:25.049 "auth": { 00:21:25.049 "state": "completed", 00:21:25.049 "digest": "sha384", 00:21:25.049 "dhgroup": "ffdhe3072" 00:21:25.049 } 00:21:25.049 } 00:21:25.049 ]' 00:21:25.049 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.049 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.049 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.308 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:21:25.308 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.308 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.308 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.308 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.567 02:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.505 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.764 02:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:26.764 02:47:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:27.023 00:21:27.023 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:27.023 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:27.023 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.282 { 00:21:27.282 "cntlid": 73, 00:21:27.282 "qid": 0, 00:21:27.282 "state": "enabled", 00:21:27.282 "listen_address": { 00:21:27.282 "trtype": "RDMA", 00:21:27.282 "adrfam": "IPv4", 00:21:27.282 "traddr": "192.168.100.8", 00:21:27.282 "trsvcid": "4420" 00:21:27.282 }, 00:21:27.282 "peer_address": { 00:21:27.282 "trtype": "RDMA", 00:21:27.282 "adrfam": "IPv4", 00:21:27.282 "traddr": "192.168.100.8", 00:21:27.282 "trsvcid": "33042" 00:21:27.282 }, 00:21:27.282 "auth": { 00:21:27.282 "state": "completed", 00:21:27.282 "digest": "sha384", 00:21:27.282 "dhgroup": "ffdhe4096" 00:21:27.282 } 00:21:27.282 } 00:21:27.282 ]' 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.282 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.541 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.541 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.541 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.801 02:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.739 02:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:28.739 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:29.307 00:21:29.307 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:29.307 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.307 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # qpairs='[ 00:21:29.567 { 00:21:29.567 "cntlid": 75, 00:21:29.567 "qid": 0, 00:21:29.567 "state": "enabled", 00:21:29.567 "listen_address": { 00:21:29.567 "trtype": "RDMA", 00:21:29.567 "adrfam": "IPv4", 00:21:29.567 "traddr": "192.168.100.8", 00:21:29.567 "trsvcid": "4420" 00:21:29.567 }, 00:21:29.567 "peer_address": { 00:21:29.567 "trtype": "RDMA", 00:21:29.567 "adrfam": "IPv4", 00:21:29.567 "traddr": "192.168.100.8", 00:21:29.567 "trsvcid": "43591" 00:21:29.567 }, 00:21:29.567 "auth": { 00:21:29.567 "state": "completed", 00:21:29.567 "digest": "sha384", 00:21:29.567 "dhgroup": "ffdhe4096" 00:21:29.567 } 00:21:29.567 } 00:21:29.567 ]' 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.567 02:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.826 02:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:30.764 02:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.764 02:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:30.764 02:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.764 02:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.764 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.764 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:30.764 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.764 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:31.023 
02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.023 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.593 00:21:31.593 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:31.593 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.593 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:31.853 { 00:21:31.853 "cntlid": 77, 00:21:31.853 "qid": 0, 00:21:31.853 "state": "enabled", 00:21:31.853 "listen_address": { 00:21:31.853 "trtype": "RDMA", 00:21:31.853 "adrfam": "IPv4", 00:21:31.853 "traddr": "192.168.100.8", 00:21:31.853 "trsvcid": "4420" 00:21:31.853 }, 00:21:31.853 "peer_address": { 00:21:31.853 "trtype": "RDMA", 00:21:31.853 "adrfam": "IPv4", 00:21:31.853 "traddr": "192.168.100.8", 00:21:31.853 "trsvcid": "41910" 00:21:31.853 }, 00:21:31.853 "auth": { 00:21:31.853 "state": "completed", 00:21:31.853 "digest": "sha384", 00:21:31.853 "dhgroup": "ffdhe4096" 00:21:31.853 } 00:21:31.853 } 00:21:31.853 ]' 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.853 02:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:31.853 02:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.853 02:47:35 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.853 02:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.112 02:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:33.051 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.310 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.569 00:21:33.569 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:21:33.569 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:33.569 02:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:33.828 { 00:21:33.828 "cntlid": 79, 00:21:33.828 "qid": 0, 00:21:33.828 "state": "enabled", 00:21:33.828 "listen_address": { 00:21:33.828 "trtype": "RDMA", 00:21:33.828 "adrfam": "IPv4", 00:21:33.828 "traddr": "192.168.100.8", 00:21:33.828 "trsvcid": "4420" 00:21:33.828 }, 00:21:33.828 "peer_address": { 00:21:33.828 "trtype": "RDMA", 00:21:33.828 "adrfam": "IPv4", 00:21:33.828 "traddr": "192.168.100.8", 00:21:33.828 "trsvcid": "54618" 00:21:33.828 }, 00:21:33.828 "auth": { 00:21:33.828 "state": "completed", 00:21:33.828 "digest": "sha384", 00:21:33.828 "dhgroup": "ffdhe4096" 00:21:33.828 } 00:21:33.828 } 00:21:33.828 ]' 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.828 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:34.087 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.087 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:34.087 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.087 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.087 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.346 02:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.282 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.542 02:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.800 00:21:36.059 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:36.059 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:36.059 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:36.318 { 00:21:36.318 "cntlid": 81, 00:21:36.318 "qid": 0, 00:21:36.318 "state": "enabled", 00:21:36.318 "listen_address": { 00:21:36.318 "trtype": "RDMA", 00:21:36.318 "adrfam": "IPv4", 00:21:36.318 "traddr": "192.168.100.8", 00:21:36.318 "trsvcid": "4420" 00:21:36.318 }, 
00:21:36.318 "peer_address": { 00:21:36.318 "trtype": "RDMA", 00:21:36.318 "adrfam": "IPv4", 00:21:36.318 "traddr": "192.168.100.8", 00:21:36.318 "trsvcid": "46795" 00:21:36.318 }, 00:21:36.318 "auth": { 00:21:36.318 "state": "completed", 00:21:36.318 "digest": "sha384", 00:21:36.318 "dhgroup": "ffdhe6144" 00:21:36.318 } 00:21:36.318 } 00:21:36.318 ]' 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.318 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.578 02:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.516 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 
00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:37.776 02:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:38.344 00:21:38.344 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:38.344 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.344 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:38.604 { 00:21:38.604 "cntlid": 83, 00:21:38.604 "qid": 0, 00:21:38.604 "state": "enabled", 00:21:38.604 "listen_address": { 00:21:38.604 "trtype": "RDMA", 00:21:38.604 "adrfam": "IPv4", 00:21:38.604 "traddr": "192.168.100.8", 00:21:38.604 "trsvcid": "4420" 00:21:38.604 }, 00:21:38.604 "peer_address": { 00:21:38.604 "trtype": "RDMA", 00:21:38.604 "adrfam": "IPv4", 00:21:38.604 "traddr": "192.168.100.8", 00:21:38.604 "trsvcid": "56445" 00:21:38.604 }, 00:21:38.604 "auth": { 00:21:38.604 "state": "completed", 00:21:38.604 "digest": "sha384", 00:21:38.604 "dhgroup": "ffdhe6144" 00:21:38.604 } 00:21:38.604 } 00:21:38.604 ]' 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.604 02:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.864 02:47:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:39.893 02:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:39.893 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.153 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:40.721 00:21:40.721 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:40.721 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:40.722 02:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
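After each attach, the next trace lines verify that the controller really came up authenticated: the host application is asked for its controller list (expected to contain exactly nvme0), and the target is asked for the subsystem's queue pairs, whose auth block should report the digest and DH group that were just configured. A sketch of those checks for the sha384/ffdhe6144 round, under the same socket and NQN assumptions as above:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # host app: the freshly attached controller should be reported as nvme0
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # target: the qpair's auth block should reflect the negotiated parameters
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]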
00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:40.981 { 00:21:40.981 "cntlid": 85, 00:21:40.981 "qid": 0, 00:21:40.981 "state": "enabled", 00:21:40.981 "listen_address": { 00:21:40.981 "trtype": "RDMA", 00:21:40.981 "adrfam": "IPv4", 00:21:40.981 "traddr": "192.168.100.8", 00:21:40.981 "trsvcid": "4420" 00:21:40.981 }, 00:21:40.981 "peer_address": { 00:21:40.981 "trtype": "RDMA", 00:21:40.981 "adrfam": "IPv4", 00:21:40.981 "traddr": "192.168.100.8", 00:21:40.981 "trsvcid": "48678" 00:21:40.981 }, 00:21:40.981 "auth": { 00:21:40.981 "state": "completed", 00:21:40.981 "digest": "sha384", 00:21:40.981 "dhgroup": "ffdhe6144" 00:21:40.981 } 00:21:40.981 } 00:21:40.981 ]' 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.981 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.241 02:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:42.180 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.180 02:47:45 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.439 02:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.008 00:21:43.008 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:43.008 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:43.008 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:43.268 { 00:21:43.268 "cntlid": 87, 00:21:43.268 "qid": 0, 00:21:43.268 "state": "enabled", 00:21:43.268 "listen_address": { 00:21:43.268 "trtype": "RDMA", 00:21:43.268 "adrfam": "IPv4", 00:21:43.268 "traddr": "192.168.100.8", 00:21:43.268 "trsvcid": "4420" 00:21:43.268 }, 00:21:43.268 "peer_address": { 00:21:43.268 "trtype": "RDMA", 00:21:43.268 "adrfam": "IPv4", 00:21:43.268 "traddr": "192.168.100.8", 00:21:43.268 "trsvcid": "46731" 00:21:43.268 }, 00:21:43.268 "auth": { 00:21:43.268 "state": "completed", 00:21:43.268 "digest": "sha384", 00:21:43.268 "dhgroup": "ffdhe6144" 00:21:43.268 } 00:21:43.268 } 00:21:43.268 ]' 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.268 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.527 02:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:44.464 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.464 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:44.464 02:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.464 02:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:44.723 02:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:21:44.723 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:44.982 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:45.551 00:21:45.551 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:45.551 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:45.551 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:45.811 { 00:21:45.811 "cntlid": 89, 00:21:45.811 "qid": 0, 00:21:45.811 "state": "enabled", 00:21:45.811 "listen_address": { 00:21:45.811 "trtype": "RDMA", 00:21:45.811 "adrfam": "IPv4", 00:21:45.811 "traddr": "192.168.100.8", 00:21:45.811 "trsvcid": "4420" 00:21:45.811 }, 00:21:45.811 "peer_address": { 00:21:45.811 "trtype": "RDMA", 00:21:45.811 "adrfam": "IPv4", 00:21:45.811 "traddr": "192.168.100.8", 00:21:45.811 "trsvcid": "57115" 00:21:45.811 }, 00:21:45.811 "auth": { 00:21:45.811 "state": "completed", 00:21:45.811 "digest": "sha384", 00:21:45.811 "dhgroup": "ffdhe8192" 00:21:45.811 } 00:21:45.811 } 00:21:45.811 ]' 00:21:45.811 02:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.811 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.379 02:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret 
DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:46.948 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.206 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:47.465 02:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:48.031 00:21:48.031 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:48.031 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:48.031 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
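Each round also exercises the kernel initiator: after the SPDK-side controller is detached (auth.sh@48), auth.sh@51 connects with nvme-cli, passing the secret in its textual DH-HMAC-CHAP form (the DHHC-1:00: through DHHC-1:03: strings in the trace, one per test key), then disconnects and removes the host from the subsystem so the next key can be provisioned. Roughly, with the key1 secret copied verbatim from the trace and the target-side call again approximated with a plain rpc.py invocation:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e

  # kernel host: authenticate in-band against the target with the plain-text secret
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid "${hostnqn#*uuid:}" \
      --dhchap-secret 'DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o:'

  # tear down and deprovision before the next key is tried
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"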
00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:48.290 { 00:21:48.290 "cntlid": 91, 00:21:48.290 "qid": 0, 00:21:48.290 "state": "enabled", 00:21:48.290 "listen_address": { 00:21:48.290 "trtype": "RDMA", 00:21:48.290 "adrfam": "IPv4", 00:21:48.290 "traddr": "192.168.100.8", 00:21:48.290 "trsvcid": "4420" 00:21:48.290 }, 00:21:48.290 "peer_address": { 00:21:48.290 "trtype": "RDMA", 00:21:48.290 "adrfam": "IPv4", 00:21:48.290 "traddr": "192.168.100.8", 00:21:48.290 "trsvcid": "34230" 00:21:48.290 }, 00:21:48.290 "auth": { 00:21:48.290 "state": "completed", 00:21:48.290 "digest": "sha384", 00:21:48.290 "dhgroup": "ffdhe8192" 00:21:48.290 } 00:21:48.290 } 00:21:48.290 ]' 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.290 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:48.549 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.549 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:48.549 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.549 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.549 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.809 02:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.745 02:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- 
# connect_authenticate sha384 ffdhe8192 2 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:50.004 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:50.572 00:21:50.572 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:50.572 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:50.572 02:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:50.831 { 00:21:50.831 "cntlid": 93, 00:21:50.831 "qid": 0, 00:21:50.831 "state": "enabled", 00:21:50.831 "listen_address": { 00:21:50.831 "trtype": "RDMA", 00:21:50.831 "adrfam": "IPv4", 00:21:50.831 "traddr": "192.168.100.8", 00:21:50.831 "trsvcid": "4420" 00:21:50.831 }, 00:21:50.831 "peer_address": { 00:21:50.831 "trtype": "RDMA", 00:21:50.831 "adrfam": "IPv4", 00:21:50.831 "traddr": "192.168.100.8", 00:21:50.831 "trsvcid": "41991" 00:21:50.831 }, 00:21:50.831 "auth": { 00:21:50.831 "state": "completed", 00:21:50.831 "digest": "sha384", 00:21:50.831 "dhgroup": "ffdhe8192" 00:21:50.831 } 00:21:50.831 } 00:21:50.831 ]' 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.831 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:51.090 02:47:54 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.090 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:51.090 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.090 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.090 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.349 02:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:52.287 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.546 02:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.114 00:21:53.114 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:53.114 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.114 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.372 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:53.372 { 00:21:53.372 "cntlid": 95, 00:21:53.372 "qid": 0, 00:21:53.372 "state": "enabled", 00:21:53.372 "listen_address": { 00:21:53.372 "trtype": "RDMA", 00:21:53.372 "adrfam": "IPv4", 00:21:53.372 "traddr": "192.168.100.8", 00:21:53.372 "trsvcid": "4420" 00:21:53.372 }, 00:21:53.372 "peer_address": { 00:21:53.372 "trtype": "RDMA", 00:21:53.372 "adrfam": "IPv4", 00:21:53.372 "traddr": "192.168.100.8", 00:21:53.372 "trsvcid": "51916" 00:21:53.372 }, 00:21:53.372 "auth": { 00:21:53.372 "state": "completed", 00:21:53.372 "digest": "sha384", 00:21:53.372 "dhgroup": "ffdhe8192" 00:21:53.373 } 00:21:53.373 } 00:21:53.373 ]' 00:21:53.373 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:53.373 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.373 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:53.631 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.631 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:53.631 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.631 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.631 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.890 02:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.829 02:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:55.089 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:55.348 00:21:55.348 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:55.348 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:55.349 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:55.608 { 00:21:55.608 "cntlid": 97, 00:21:55.608 "qid": 0, 00:21:55.608 "state": "enabled", 00:21:55.608 "listen_address": { 00:21:55.608 "trtype": "RDMA", 00:21:55.608 "adrfam": "IPv4", 00:21:55.608 "traddr": "192.168.100.8", 00:21:55.608 "trsvcid": "4420" 00:21:55.608 }, 00:21:55.608 "peer_address": { 00:21:55.608 "trtype": "RDMA", 00:21:55.608 "adrfam": "IPv4", 00:21:55.608 "traddr": "192.168.100.8", 00:21:55.608 "trsvcid": "49905" 00:21:55.608 }, 00:21:55.608 "auth": { 00:21:55.608 "state": "completed", 00:21:55.608 "digest": "sha512", 00:21:55.608 "dhgroup": "null" 00:21:55.608 } 00:21:55.608 } 00:21:55.608 ]' 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.608 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.609 02:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.868 02:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:21:56.806 02:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.806 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:56.806 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.806 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:57.066 
02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:57.066 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:57.325 00:21:57.584 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:57.584 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:57.584 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:57.844 { 00:21:57.844 "cntlid": 99, 00:21:57.844 "qid": 0, 00:21:57.844 "state": "enabled", 00:21:57.844 "listen_address": { 00:21:57.844 "trtype": "RDMA", 00:21:57.844 "adrfam": "IPv4", 00:21:57.844 "traddr": "192.168.100.8", 00:21:57.844 "trsvcid": "4420" 00:21:57.844 }, 00:21:57.844 "peer_address": { 00:21:57.844 "trtype": "RDMA", 00:21:57.844 "adrfam": "IPv4", 00:21:57.844 "traddr": "192.168.100.8", 00:21:57.844 "trsvcid": "55721" 00:21:57.844 }, 00:21:57.844 "auth": { 00:21:57.844 "state": "completed", 00:21:57.844 "digest": "sha512", 00:21:57.844 "dhgroup": "null" 00:21:57.844 } 00:21:57.844 } 00:21:57.844 ]' 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:57.844 02:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 
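From target/auth.sh@84 onward the outer digest has switched to sha512, and the first DH group exercised with it is the literal "null" group: bdev_nvme_set_options is reissued with --dhchap-digests sha512 --dhchap-dhgroups null, and the qpair check is then expected to report dhgroup "null", i.e. DH-HMAC-CHAP without an FFDHE exchange. The reconfiguration step as it appears in the trace, sockets assumed as above:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # host app: restrict the allowed auth parameters for the next round of attaches
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null

  # a later nvmf_subsystem_get_qpairs on the target is then expected to show
  #   "digest": "sha512", "dhgroup": "null", "state": "completed"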
00:21:57.844 02:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.844 02:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.844 02:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.103 02:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.044 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.303 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:21:59.604 00:21:59.604 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:59.604 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:59.604 02:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:59.908 { 00:21:59.908 "cntlid": 101, 00:21:59.908 "qid": 0, 00:21:59.908 "state": "enabled", 00:21:59.908 "listen_address": { 00:21:59.908 "trtype": "RDMA", 00:21:59.908 "adrfam": "IPv4", 00:21:59.908 "traddr": "192.168.100.8", 00:21:59.908 "trsvcid": "4420" 00:21:59.908 }, 00:21:59.908 "peer_address": { 00:21:59.908 "trtype": "RDMA", 00:21:59.908 "adrfam": "IPv4", 00:21:59.908 "traddr": "192.168.100.8", 00:21:59.908 "trsvcid": "45437" 00:21:59.908 }, 00:21:59.908 "auth": { 00:21:59.908 "state": "completed", 00:21:59.908 "digest": "sha512", 00:21:59.908 "dhgroup": "null" 00:21:59.908 } 00:21:59.908 } 00:21:59.908 ]' 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:59.908 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:00.167 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.167 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.167 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.426 02:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
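The @84/@85/@86 lines that keep reappearing are the three nested loops in auth.sh driving all of these rounds: every digest is paired with every DH group, and every configured key is attached and authenticated once per pair, with the host application narrowed to exactly that combination first. A reconstructed skeleton of that structure; the hostrpc and connect_authenticate helpers and the digests/dhgroups/keys arrays are the ones referenced by the trace, and their contents beyond what is visible in this excerpt are assumptions:

  # outer loops as traced at target/auth.sh@84-@87; connect_authenticate (@89)
  # performs the add_host / attach / verify / teardown sequence shown above
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done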
00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.363 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.622 02:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.881 00:22:01.881 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:01.881 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:01.881 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:02.140 { 00:22:02.140 "cntlid": 103, 00:22:02.140 "qid": 0, 00:22:02.140 "state": "enabled", 00:22:02.140 "listen_address": { 00:22:02.140 "trtype": "RDMA", 00:22:02.140 "adrfam": "IPv4", 00:22:02.140 "traddr": "192.168.100.8", 00:22:02.140 "trsvcid": "4420" 00:22:02.140 }, 00:22:02.140 "peer_address": { 00:22:02.140 
"trtype": "RDMA", 00:22:02.140 "adrfam": "IPv4", 00:22:02.140 "traddr": "192.168.100.8", 00:22:02.140 "trsvcid": "59619" 00:22:02.140 }, 00:22:02.140 "auth": { 00:22:02.140 "state": "completed", 00:22:02.140 "digest": "sha512", 00:22:02.140 "dhgroup": "null" 00:22:02.140 } 00:22:02.140 } 00:22:02.140 ]' 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.140 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.399 02:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.337 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.596 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.597 02:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.856 00:22:03.856 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:03.856 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:03.856 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.115 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.115 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.115 02:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.115 02:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.115 02:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:04.375 { 00:22:04.375 "cntlid": 105, 00:22:04.375 "qid": 0, 00:22:04.375 "state": "enabled", 00:22:04.375 "listen_address": { 00:22:04.375 "trtype": "RDMA", 00:22:04.375 "adrfam": "IPv4", 00:22:04.375 "traddr": "192.168.100.8", 00:22:04.375 "trsvcid": "4420" 00:22:04.375 }, 00:22:04.375 "peer_address": { 00:22:04.375 "trtype": "RDMA", 00:22:04.375 "adrfam": "IPv4", 00:22:04.375 "traddr": "192.168.100.8", 00:22:04.375 "trsvcid": "42949" 00:22:04.375 }, 00:22:04.375 "auth": { 00:22:04.375 "state": "completed", 00:22:04.375 "digest": "sha512", 00:22:04.375 "dhgroup": "ffdhe2048" 00:22:04.375 } 00:22:04.375 } 00:22:04.375 ]' 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.375 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.634 02:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.572 02:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:05.832 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:06.091 00:22:06.091 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:06.091 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:06.091 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:06.349 { 00:22:06.349 "cntlid": 107, 00:22:06.349 "qid": 0, 00:22:06.349 "state": "enabled", 00:22:06.349 "listen_address": { 00:22:06.349 "trtype": "RDMA", 00:22:06.349 "adrfam": "IPv4", 00:22:06.349 "traddr": "192.168.100.8", 00:22:06.349 "trsvcid": "4420" 00:22:06.349 }, 00:22:06.349 "peer_address": { 00:22:06.349 "trtype": "RDMA", 00:22:06.349 "adrfam": "IPv4", 00:22:06.349 "traddr": "192.168.100.8", 00:22:06.349 "trsvcid": "53221" 00:22:06.349 }, 00:22:06.349 "auth": { 00:22:06.349 "state": "completed", 00:22:06.349 "digest": "sha512", 00:22:06.349 "dhgroup": "ffdhe2048" 00:22:06.349 } 00:22:06.349 } 00:22:06.349 ]' 00:22:06.349 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.608 02:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.866 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.803 02:48:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.062 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:08.322 00:22:08.322 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:08.322 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:08.322 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:08.581 { 00:22:08.581 "cntlid": 109, 00:22:08.581 "qid": 0, 00:22:08.581 "state": "enabled", 00:22:08.581 "listen_address": { 00:22:08.581 "trtype": "RDMA", 00:22:08.581 "adrfam": "IPv4", 00:22:08.581 "traddr": "192.168.100.8", 00:22:08.581 "trsvcid": "4420" 00:22:08.581 }, 00:22:08.581 "peer_address": { 00:22:08.581 "trtype": "RDMA", 00:22:08.581 "adrfam": "IPv4", 00:22:08.581 "traddr": "192.168.100.8", 00:22:08.581 "trsvcid": "33371" 00:22:08.581 }, 00:22:08.581 "auth": { 00:22:08.581 "state": "completed", 00:22:08.581 "digest": "sha512", 00:22:08.581 "dhgroup": "ffdhe2048" 00:22:08.581 } 
00:22:08.581 } 00:22:08.581 ]' 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:08.581 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:08.841 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.841 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.841 02:48:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.099 02:48:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:10.037 02:48:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:10.037 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:10.296 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:10.297 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:10.297 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.297 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:10.297 02:48:13 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.297 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.555 00:22:10.555 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:10.555 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:10.555 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:10.814 { 00:22:10.814 "cntlid": 111, 00:22:10.814 "qid": 0, 00:22:10.814 "state": "enabled", 00:22:10.814 "listen_address": { 00:22:10.814 "trtype": "RDMA", 00:22:10.814 "adrfam": "IPv4", 00:22:10.814 "traddr": "192.168.100.8", 00:22:10.814 "trsvcid": "4420" 00:22:10.814 }, 00:22:10.814 "peer_address": { 00:22:10.814 "trtype": "RDMA", 00:22:10.814 "adrfam": "IPv4", 00:22:10.814 "traddr": "192.168.100.8", 00:22:10.814 "trsvcid": "51361" 00:22:10.814 }, 00:22:10.814 "auth": { 00:22:10.814 "state": "completed", 00:22:10.814 "digest": "sha512", 00:22:10.814 "dhgroup": "ffdhe2048" 00:22:10.814 } 00:22:10.814 } 00:22:10.814 ]' 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:10.814 02:48:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:10.814 02:48:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.814 02:48:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.814 02:48:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.073 02:48:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret 
DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.011 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:12.271 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:12.840 00:22:12.840 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:12.840 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:12.840 02:48:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.840 
02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:12.840 { 00:22:12.840 "cntlid": 113, 00:22:12.840 "qid": 0, 00:22:12.840 "state": "enabled", 00:22:12.840 "listen_address": { 00:22:12.840 "trtype": "RDMA", 00:22:12.840 "adrfam": "IPv4", 00:22:12.840 "traddr": "192.168.100.8", 00:22:12.840 "trsvcid": "4420" 00:22:12.840 }, 00:22:12.840 "peer_address": { 00:22:12.840 "trtype": "RDMA", 00:22:12.840 "adrfam": "IPv4", 00:22:12.840 "traddr": "192.168.100.8", 00:22:12.840 "trsvcid": "45410" 00:22:12.840 }, 00:22:12.840 "auth": { 00:22:12.840 "state": "completed", 00:22:12.840 "digest": "sha512", 00:22:12.840 "dhgroup": "ffdhe3072" 00:22:12.840 } 00:22:12.840 } 00:22:12.840 ]' 00:22:12.840 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.100 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.359 02:48:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.297 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.557 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:14.558 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.558 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.558 02:48:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.558 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:14.558 02:48:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:14.817 00:22:14.817 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:14.817 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:14.817 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:15.075 { 00:22:15.075 "cntlid": 115, 00:22:15.075 "qid": 0, 00:22:15.075 "state": "enabled", 00:22:15.075 "listen_address": { 00:22:15.075 "trtype": "RDMA", 00:22:15.075 "adrfam": "IPv4", 00:22:15.075 "traddr": "192.168.100.8", 00:22:15.075 "trsvcid": "4420" 00:22:15.075 }, 00:22:15.075 "peer_address": { 00:22:15.075 "trtype": "RDMA", 00:22:15.075 "adrfam": "IPv4", 00:22:15.075 "traddr": "192.168.100.8", 00:22:15.075 "trsvcid": "34894" 00:22:15.075 }, 00:22:15.075 "auth": { 00:22:15.075 "state": "completed", 00:22:15.075 "digest": "sha512", 00:22:15.075 "dhgroup": "ffdhe3072" 00:22:15.075 } 00:22:15.075 } 00:22:15.075 ]' 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:22:15.075 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:15.333 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:15.333 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:15.333 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.333 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.333 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.592 02:48:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:16.530 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:22:16.788 02:48:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:17.047 00:22:17.047 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:17.047 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.047 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:17.306 { 00:22:17.306 "cntlid": 117, 00:22:17.306 "qid": 0, 00:22:17.306 "state": "enabled", 00:22:17.306 "listen_address": { 00:22:17.306 "trtype": "RDMA", 00:22:17.306 "adrfam": "IPv4", 00:22:17.306 "traddr": "192.168.100.8", 00:22:17.306 "trsvcid": "4420" 00:22:17.306 }, 00:22:17.306 "peer_address": { 00:22:17.306 "trtype": "RDMA", 00:22:17.306 "adrfam": "IPv4", 00:22:17.306 "traddr": "192.168.100.8", 00:22:17.306 "trsvcid": "48256" 00:22:17.306 }, 00:22:17.306 "auth": { 00:22:17.306 "state": "completed", 00:22:17.306 "digest": "sha512", 00:22:17.306 "dhgroup": "ffdhe3072" 00:22:17.306 } 00:22:17.306 } 00:22:17.306 ]' 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:17.306 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:17.565 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.566 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.566 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.825 02:48:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.764 02:48:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:18.764 02:48:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.022 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.314 00:22:19.314 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:19.314 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:19.314 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.601 
02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:19.601 { 00:22:19.601 "cntlid": 119, 00:22:19.601 "qid": 0, 00:22:19.601 "state": "enabled", 00:22:19.601 "listen_address": { 00:22:19.601 "trtype": "RDMA", 00:22:19.601 "adrfam": "IPv4", 00:22:19.601 "traddr": "192.168.100.8", 00:22:19.601 "trsvcid": "4420" 00:22:19.601 }, 00:22:19.601 "peer_address": { 00:22:19.601 "trtype": "RDMA", 00:22:19.601 "adrfam": "IPv4", 00:22:19.601 "traddr": "192.168.100.8", 00:22:19.601 "trsvcid": "54680" 00:22:19.601 }, 00:22:19.601 "auth": { 00:22:19.601 "state": "completed", 00:22:19.601 "digest": "sha512", 00:22:19.601 "dhgroup": "ffdhe3072" 00:22:19.601 } 00:22:19.601 } 00:22:19.601 ]' 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.601 02:48:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.860 02:48:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:20.795 02:48:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:20.795 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 
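With ffdhe3072 finished, the loop moves on to sha512/ffdhe4096 and repeats the same sequence for keys 0 through 3. Each connect_authenticate pass ends with the same three jq checks against the nvmf_subsystem_get_qpairs output, as logged above; a condensed sketch of that verification step, with qpairs used as an illustrative variable name:

  # confirm the negotiated auth parameters on the established qpair
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]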
00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:21.054 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:21.314 00:22:21.314 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.573 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.831 02:48:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.831 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:21.831 { 00:22:21.831 "cntlid": 121, 00:22:21.831 "qid": 0, 00:22:21.831 "state": "enabled", 00:22:21.831 "listen_address": { 00:22:21.831 "trtype": "RDMA", 00:22:21.831 "adrfam": "IPv4", 00:22:21.831 "traddr": "192.168.100.8", 00:22:21.831 "trsvcid": "4420" 00:22:21.831 }, 00:22:21.831 "peer_address": { 00:22:21.831 "trtype": "RDMA", 00:22:21.831 "adrfam": "IPv4", 00:22:21.831 "traddr": "192.168.100.8", 00:22:21.831 "trsvcid": "42464" 00:22:21.831 }, 00:22:21.831 "auth": { 00:22:21.831 "state": "completed", 00:22:21.831 "digest": "sha512", 00:22:21.831 "dhgroup": "ffdhe4096" 00:22:21.831 } 00:22:21.831 } 00:22:21.831 ]' 00:22:21.832 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:21.832 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.832 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:21.832 02:48:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.832 02:48:24 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:21.832 02:48:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.832 02:48:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.832 02:48:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.090 02:48:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.026 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:23.284 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:23.543 00:22:23.802 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:23.802 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:23.802 02:48:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:24.060 { 00:22:24.060 "cntlid": 123, 00:22:24.060 "qid": 0, 00:22:24.060 "state": "enabled", 00:22:24.060 "listen_address": { 00:22:24.060 "trtype": "RDMA", 00:22:24.060 "adrfam": "IPv4", 00:22:24.060 "traddr": "192.168.100.8", 00:22:24.060 "trsvcid": "4420" 00:22:24.060 }, 00:22:24.060 "peer_address": { 00:22:24.060 "trtype": "RDMA", 00:22:24.060 "adrfam": "IPv4", 00:22:24.060 "traddr": "192.168.100.8", 00:22:24.060 "trsvcid": "52259" 00:22:24.060 }, 00:22:24.060 "auth": { 00:22:24.060 "state": "completed", 00:22:24.060 "digest": "sha512", 00:22:24.060 "dhgroup": "ffdhe4096" 00:22:24.060 } 00:22:24.060 } 00:22:24.060 ]' 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.060 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.318 02:48:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.256 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.516 02:48:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.775 00:22:25.775 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:25.775 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.775 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:26.034 { 00:22:26.034 "cntlid": 125, 00:22:26.034 "qid": 0, 00:22:26.034 "state": "enabled", 00:22:26.034 "listen_address": { 00:22:26.034 "trtype": "RDMA", 00:22:26.034 "adrfam": "IPv4", 00:22:26.034 
"traddr": "192.168.100.8", 00:22:26.034 "trsvcid": "4420" 00:22:26.034 }, 00:22:26.034 "peer_address": { 00:22:26.034 "trtype": "RDMA", 00:22:26.034 "adrfam": "IPv4", 00:22:26.034 "traddr": "192.168.100.8", 00:22:26.034 "trsvcid": "35132" 00:22:26.034 }, 00:22:26.034 "auth": { 00:22:26.034 "state": "completed", 00:22:26.034 "digest": "sha512", 00:22:26.034 "dhgroup": "ffdhe4096" 00:22:26.034 } 00:22:26.034 } 00:22:26.034 ]' 00:22:26.034 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.293 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.553 02:48:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:27.491 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.491 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.492 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.751 02:48:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.011 00:22:28.011 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:28.011 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.011 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.278 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:28.278 { 00:22:28.278 "cntlid": 127, 00:22:28.278 "qid": 0, 00:22:28.278 "state": "enabled", 00:22:28.278 "listen_address": { 00:22:28.278 "trtype": "RDMA", 00:22:28.278 "adrfam": "IPv4", 00:22:28.278 "traddr": "192.168.100.8", 00:22:28.278 "trsvcid": "4420" 00:22:28.278 }, 00:22:28.278 "peer_address": { 00:22:28.279 "trtype": "RDMA", 00:22:28.279 "adrfam": "IPv4", 00:22:28.279 "traddr": "192.168.100.8", 00:22:28.279 "trsvcid": "46104" 00:22:28.279 }, 00:22:28.279 "auth": { 00:22:28.279 "state": "completed", 00:22:28.279 "digest": "sha512", 00:22:28.279 "dhgroup": "ffdhe4096" 00:22:28.279 } 00:22:28.279 } 00:22:28.279 ]' 00:22:28.279 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:28.279 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.279 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:28.541 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:28.541 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:28.541 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.541 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.541 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.800 02:48:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:29.367 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.626 02:48:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:29.886 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:30.455 00:22:30.455 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:30.455 02:48:33 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@43 -- # jq -r '.[].name' 00:22:30.455 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:30.713 { 00:22:30.713 "cntlid": 129, 00:22:30.713 "qid": 0, 00:22:30.713 "state": "enabled", 00:22:30.713 "listen_address": { 00:22:30.713 "trtype": "RDMA", 00:22:30.713 "adrfam": "IPv4", 00:22:30.713 "traddr": "192.168.100.8", 00:22:30.713 "trsvcid": "4420" 00:22:30.713 }, 00:22:30.713 "peer_address": { 00:22:30.713 "trtype": "RDMA", 00:22:30.713 "adrfam": "IPv4", 00:22:30.713 "traddr": "192.168.100.8", 00:22:30.713 "trsvcid": "47541" 00:22:30.713 }, 00:22:30.713 "auth": { 00:22:30.713 "state": "completed", 00:22:30.713 "digest": "sha512", 00:22:30.713 "dhgroup": "ffdhe6144" 00:22:30.713 } 00:22:30.713 } 00:22:30.713 ]' 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:30.713 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.714 02:48:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.972 02:48:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:31.910 02:48:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for 
keyid in "${!keys[@]}" 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.910 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:32.169 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:32.738 00:22:32.738 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:32.738 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:32.738 02:48:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:32.997 { 00:22:32.997 "cntlid": 131, 00:22:32.997 "qid": 0, 00:22:32.997 "state": "enabled", 00:22:32.997 "listen_address": { 00:22:32.997 "trtype": "RDMA", 00:22:32.997 "adrfam": "IPv4", 00:22:32.997 "traddr": "192.168.100.8", 00:22:32.997 "trsvcid": "4420" 00:22:32.997 }, 00:22:32.997 "peer_address": { 00:22:32.997 "trtype": "RDMA", 00:22:32.997 "adrfam": "IPv4", 00:22:32.997 "traddr": "192.168.100.8", 00:22:32.997 "trsvcid": "46972" 00:22:32.997 }, 00:22:32.997 "auth": { 
00:22:32.997 "state": "completed", 00:22:32.997 "digest": "sha512", 00:22:32.997 "dhgroup": "ffdhe6144" 00:22:32.997 } 00:22:32.997 } 00:22:32.997 ]' 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.997 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.256 02:48:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.194 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.454 02:48:37 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:34.454 02:48:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:35.022 00:22:35.022 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:35.022 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:35.022 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:35.281 { 00:22:35.281 "cntlid": 133, 00:22:35.281 "qid": 0, 00:22:35.281 "state": "enabled", 00:22:35.281 "listen_address": { 00:22:35.281 "trtype": "RDMA", 00:22:35.281 "adrfam": "IPv4", 00:22:35.281 "traddr": "192.168.100.8", 00:22:35.281 "trsvcid": "4420" 00:22:35.281 }, 00:22:35.281 "peer_address": { 00:22:35.281 "trtype": "RDMA", 00:22:35.281 "adrfam": "IPv4", 00:22:35.281 "traddr": "192.168.100.8", 00:22:35.281 "trsvcid": "42061" 00:22:35.281 }, 00:22:35.281 "auth": { 00:22:35.281 "state": "completed", 00:22:35.281 "digest": "sha512", 00:22:35.281 "dhgroup": "ffdhe6144" 00:22:35.281 } 00:22:35.281 } 00:22:35.281 ]' 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:35.281 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:35.539 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.539 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.539 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.797 02:48:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 
00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:36.738 02:48:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:36.996 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.254 00:22:37.254 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:37.254 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:37.254 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:37.526 { 00:22:37.526 "cntlid": 135, 00:22:37.526 "qid": 0, 00:22:37.526 "state": "enabled", 00:22:37.526 "listen_address": { 00:22:37.526 "trtype": "RDMA", 00:22:37.526 "adrfam": "IPv4", 00:22:37.526 "traddr": "192.168.100.8", 00:22:37.526 "trsvcid": "4420" 00:22:37.526 }, 00:22:37.526 "peer_address": { 00:22:37.526 "trtype": "RDMA", 00:22:37.526 "adrfam": "IPv4", 00:22:37.526 "traddr": "192.168.100.8", 00:22:37.526 "trsvcid": "35865" 00:22:37.526 }, 00:22:37.526 "auth": { 00:22:37.526 "state": "completed", 00:22:37.526 "digest": "sha512", 00:22:37.526 "dhgroup": "ffdhe6144" 00:22:37.526 } 00:22:37.526 } 00:22:37.526 ]' 00:22:37.526 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.797 02:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.055 02:48:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:38.989 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.247 02:48:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:39.813 00:22:39.813 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:39.813 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.813 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:40.072 { 00:22:40.072 "cntlid": 137, 00:22:40.072 "qid": 0, 00:22:40.072 "state": "enabled", 00:22:40.072 "listen_address": { 00:22:40.072 "trtype": "RDMA", 00:22:40.072 "adrfam": "IPv4", 00:22:40.072 "traddr": "192.168.100.8", 00:22:40.072 "trsvcid": "4420" 00:22:40.072 }, 00:22:40.072 "peer_address": { 00:22:40.072 "trtype": "RDMA", 00:22:40.072 "adrfam": "IPv4", 00:22:40.072 "traddr": "192.168.100.8", 00:22:40.072 "trsvcid": "36149" 00:22:40.072 }, 00:22:40.072 "auth": { 00:22:40.072 "state": "completed", 00:22:40.072 "digest": "sha512", 00:22:40.072 "dhgroup": "ffdhe8192" 00:22:40.072 } 00:22:40.072 } 00:22:40.072 ]' 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.072 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:40.330 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:40.330 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:40.330 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.330 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.330 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.588 02:48:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.525 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:41.784 02:48:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:42.350 00:22:42.350 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:42.350 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:42.350 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:42.608 { 00:22:42.608 "cntlid": 139, 00:22:42.608 "qid": 0, 00:22:42.608 "state": "enabled", 00:22:42.608 "listen_address": { 00:22:42.608 "trtype": "RDMA", 00:22:42.608 "adrfam": "IPv4", 00:22:42.608 "traddr": "192.168.100.8", 00:22:42.608 "trsvcid": "4420" 00:22:42.608 }, 00:22:42.608 "peer_address": { 00:22:42.608 "trtype": "RDMA", 00:22:42.608 "adrfam": "IPv4", 00:22:42.608 "traddr": "192.168.100.8", 00:22:42.608 "trsvcid": "49119" 00:22:42.608 }, 00:22:42.608 "auth": { 00:22:42.608 "state": "completed", 00:22:42.608 "digest": "sha512", 00:22:42.608 "dhgroup": "ffdhe8192" 00:22:42.608 } 00:22:42.608 } 00:22:42.608 ]' 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.608 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:42.866 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.866 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.866 02:48:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.124 02:48:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:01:YjlhYWI1NmEwOTI3NjUyY2EyNjdmY2QzYTFlYjBlNmMNPc4o: 00:22:44.058 02:48:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.058 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:44.058 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:44.059 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key2 00:22:44.059 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.059 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.317 02:48:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.317 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:44.317 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:44.882 00:22:44.882 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:44.882 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:44.882 02:48:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.140 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.140 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.140 02:48:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.140 02:48:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.140 02:48:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.140 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:45.140 { 00:22:45.141 "cntlid": 141, 00:22:45.141 "qid": 0, 00:22:45.141 "state": "enabled", 00:22:45.141 "listen_address": { 00:22:45.141 "trtype": "RDMA", 00:22:45.141 "adrfam": "IPv4", 00:22:45.141 "traddr": "192.168.100.8", 00:22:45.141 "trsvcid": "4420" 00:22:45.141 }, 00:22:45.141 "peer_address": { 00:22:45.141 "trtype": "RDMA", 00:22:45.141 "adrfam": "IPv4", 00:22:45.141 "traddr": "192.168.100.8", 00:22:45.141 "trsvcid": "57239" 00:22:45.141 }, 00:22:45.141 "auth": { 00:22:45.141 "state": "completed", 00:22:45.141 "digest": "sha512", 00:22:45.141 "dhgroup": "ffdhe8192" 00:22:45.141 } 00:22:45.141 } 00:22:45.141 ]' 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.141 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.399 02:48:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:02:ODA3ODUwYjhlZWZmZjY5OTcyZDYzMTZjYzQ5NTM4ZDZmYmZkZWYzZjczOWJlMjdm6f7hnw==: 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.331 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:46.589 02:48:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key3 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:46.589 02:48:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.523 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.523 02:48:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.524 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:47.524 { 00:22:47.524 "cntlid": 143, 00:22:47.524 "qid": 0, 00:22:47.524 "state": "enabled", 00:22:47.524 "listen_address": { 00:22:47.524 "trtype": "RDMA", 00:22:47.524 "adrfam": "IPv4", 00:22:47.524 "traddr": "192.168.100.8", 00:22:47.524 "trsvcid": "4420" 00:22:47.524 }, 00:22:47.524 "peer_address": { 00:22:47.524 "trtype": "RDMA", 00:22:47.524 "adrfam": "IPv4", 00:22:47.524 "traddr": "192.168.100.8", 00:22:47.524 "trsvcid": "42956" 00:22:47.524 }, 00:22:47.524 "auth": { 00:22:47.524 "state": "completed", 00:22:47.524 "digest": "sha512", 00:22:47.524 "dhgroup": "ffdhe8192" 00:22:47.524 } 00:22:47.524 } 00:22:47.524 ]' 00:22:47.524 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.782 02:48:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.040 02:48:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:03:OWFiZmZhMmU5MTRjMWFiZmNhZDIzMWE0NmQ1MjAyZDFhNWI1Zjk3YTYwYjhhMDQ3YjU5OTQ1ZTNlYTAzMWU3OeEzFsw=: 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:48.974 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:49.232 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key0 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:49.233 02:48:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:49.798 00:22:49.798 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:49.798 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.798 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:50.056 { 00:22:50.056 "cntlid": 145, 00:22:50.056 "qid": 0, 00:22:50.056 "state": "enabled", 00:22:50.056 "listen_address": { 00:22:50.056 "trtype": "RDMA", 00:22:50.056 "adrfam": "IPv4", 00:22:50.056 "traddr": "192.168.100.8", 00:22:50.056 "trsvcid": "4420" 00:22:50.056 }, 00:22:50.056 "peer_address": { 00:22:50.056 "trtype": "RDMA", 00:22:50.056 "adrfam": "IPv4", 00:22:50.056 "traddr": "192.168.100.8", 00:22:50.056 "trsvcid": "33361" 00:22:50.056 }, 00:22:50.056 "auth": { 00:22:50.056 "state": "completed", 00:22:50.056 "digest": "sha512", 00:22:50.056 "dhgroup": "ffdhe8192" 00:22:50.056 } 00:22:50.056 } 00:22:50.056 ]' 00:22:50.056 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.314 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.571 02:48:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid 00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-secret DHHC-1:00:MDE5NTQxZTE3YmFkOGJhYjQxNDY2YTdiNWMxYTQ1ZTUwNWY4ZDJlMDIyZDU4ZDE2Sx+WCg==: 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --dhchap-key key1 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.511 02:48:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:23.658 request: 00:23:23.658 { 00:23:23.658 "name": "nvme0", 00:23:23.658 "trtype": "rdma", 00:23:23.658 "traddr": "192.168.100.8", 00:23:23.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e", 00:23:23.658 "adrfam": "ipv4", 00:23:23.658 "trsvcid": "4420", 00:23:23.658 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:23.658 "dhchap_key": "key2", 00:23:23.658 "method": "bdev_nvme_attach_controller", 00:23:23.658 "req_id": 1 00:23:23.658 } 00:23:23.658 Got JSON-RPC error response 00:23:23.658 response: 00:23:23.658 { 00:23:23.658 "code": -32602, 00:23:23.658 "message": "Invalid parameters" 
00:23:23.658 } 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 836235 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 836235 ']' 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 836235 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 836235 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:23.658 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 836235' 00:23:23.658 killing process with pid 836235 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 836235 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 836235 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:23.659 rmmod nvme_rdma 00:23:23.659 rmmod nvme_fabrics 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 836053 ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 
836053 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 836053 ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 836053 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 836053 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 836053' 00:23:23.659 killing process with pid 836053 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 836053 00:23:23.659 02:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 836053 00:23:23.659 02:49:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:23.659 02:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uFg /tmp/spdk.key-sha256.Sbp /tmp/spdk.key-sha384.odW /tmp/spdk.key-sha512.vTU /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:23:23.659 00:23:23.659 real 3m29.158s 00:23:23.659 user 7m50.997s 00:23:23.659 sys 0m25.275s 00:23:23.659 02:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:23.659 02:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.659 ************************************ 00:23:23.659 END TEST nvmf_auth_target 00:23:23.659 ************************************ 00:23:23.659 02:49:26 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:23:23.659 02:49:26 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:23.659 02:49:26 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:23.659 02:49:26 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:23.659 02:49:26 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:23.659 02:49:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:23.659 ************************************ 00:23:23.659 START TEST nvmf_fuzz 00:23:23.659 ************************************ 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:23.659 * Looking for test storage... 
00:23:23.659 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.659 02:49:26 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.229 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:30.230 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 
]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:30.230 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:30.230 Found net devices under 0000:18:00.0: mlx_0_0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:30.230 Found net devices under 0000:18:00.1: mlx_0_1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:30.230 02:49:32 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:30.230 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:30.230 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:23:30.230 altname enp24s0f0np0 00:23:30.230 altname ens785f0np0 00:23:30.230 inet 
192.168.100.8/24 scope global mlx_0_0 00:23:30.230 valid_lft forever preferred_lft forever 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:30.230 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:30.230 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:23:30.230 altname enp24s0f1np1 00:23:30.230 altname ens785f1np1 00:23:30.230 inet 192.168.100.9/24 scope global mlx_0_1 00:23:30.230 valid_lft forever preferred_lft forever 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:30.230 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:30.231 02:49:32 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:30.231 192.168.100.9' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:30.231 192.168.100.9' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:30.231 192.168.100.9' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=866466 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 866466 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@828 -- # '[' -z 866466 ']' 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:30.231 02:49:32 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@861 -- # return 0 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 Malloc0 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:23:30.231 02:49:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:24:02.318 Fuzzing completed. 
Shutting down the fuzz application 00:24:02.318 00:24:02.318 Dumping successful admin opcodes: 00:24:02.318 8, 9, 10, 24, 00:24:02.318 Dumping successful io opcodes: 00:24:02.318 0, 9, 00:24:02.318 NS: 0x200003af1f00 I/O qp, Total commands completed: 643702, total successful commands: 3758, random_seed: 2803008448 00:24:02.318 NS: 0x200003af1f00 admin qp, Total commands completed: 84000, total successful commands: 668, random_seed: 1741035904 00:24:02.318 02:50:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:02.318 Fuzzing completed. Shutting down the fuzz application 00:24:02.318 00:24:02.318 Dumping successful admin opcodes: 00:24:02.318 24, 00:24:02.318 Dumping successful io opcodes: 00:24:02.318 00:24:02.318 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1336921848 00:24:02.318 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1337029450 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:02.318 rmmod nvme_rdma 00:24:02.318 rmmod nvme_fabrics 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 866466 ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@947 -- # '[' -z 866466 ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@951 -- # kill -0 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # uname 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:02.318 02:50:05 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 866466' 00:24:02.318 killing process with pid 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@966 -- # kill 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@971 -- # wait 866466 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:02.318 00:24:02.318 real 0m39.208s 00:24:02.318 user 0m49.705s 00:24:02.318 sys 0m20.364s 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:02.318 02:50:05 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:02.318 ************************************ 00:24:02.318 END TEST nvmf_fuzz 00:24:02.318 ************************************ 00:24:02.318 02:50:05 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:02.318 02:50:05 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:02.318 02:50:05 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:02.318 02:50:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:02.582 ************************************ 00:24:02.582 START TEST nvmf_multiconnection 00:24:02.582 ************************************ 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:02.582 * Looking for test storage... 
00:24:02.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.582 02:50:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:09.152 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:09.152 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:09.152 Found net devices under 0000:18:00.0: mlx_0_0 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:09.152 Found net devices under 0000:18:00.1: mlx_0_1 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:09.152 02:50:11 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 
-- # for net_dev in "${net_devs[@]}" 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:09.152 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:09.152 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:24:09.152 altname enp24s0f0np0 00:24:09.152 altname ens785f0np0 00:24:09.152 inet 192.168.100.8/24 scope global mlx_0_0 00:24:09.152 valid_lft forever preferred_lft forever 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:09.152 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:09.153 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:09.153 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:24:09.153 altname enp24s0f1np1 00:24:09.153 altname 
ens785f1np1 00:24:09.153 inet 192.168.100.9/24 scope global mlx_0_1 00:24:09.153 valid_lft forever preferred_lft forever 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:09.153 192.168.100.9' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:09.153 192.168.100.9' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:09.153 192.168.100.9' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=873505 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 873505 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@828 -- # '[' -z 873505 ']' 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:09.153 02:50:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 [2024-05-15 02:50:12.243876] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
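Up to this point the trace has resolved the two RDMA-capable ports (mlx_0_0 / mlx_0_1) to 192.168.100.8 and 192.168.100.9, set NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024', loaded nvme-rdma, and launched nvmf_tgt. A minimal sketch of the address lookup the trace performs via nvmf/common.sh's get_ip_address, condensed from the ip/awk/cut pipeline shown above (not the verbatim helper):

    # Return the IPv4 address of an RDMA netdev, as traced above:
    #   ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
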
00:24:09.153 [2024-05-15 02:50:12.243955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.153 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.153 [2024-05-15 02:50:12.351649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.153 [2024-05-15 02:50:12.401169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.153 [2024-05-15 02:50:12.401221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.153 [2024-05-15 02:50:12.401235] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.153 [2024-05-15 02:50:12.401248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.153 [2024-05-15 02:50:12.401259] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.153 [2024-05-15 02:50:12.401370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.153 [2024-05-15 02:50:12.401456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.153 [2024-05-15 02:50:12.401558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.153 [2024-05-15 02:50:12.401558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@861 -- # return 0 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.090 [2024-05-15 02:50:13.128347] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2113d70/0x2118260) succeed. 00:24:10.090 [2024-05-15 02:50:13.143237] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21153b0/0x21598f0) succeed. 
00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.090 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 Malloc1 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 [2024-05-15 02:50:13.354145] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:10.091 [2024-05-15 02:50:13.354561] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.091 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.359 Malloc2 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.359 02:50:13 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:10.359 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 Malloc3 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 Malloc4 00:24:10.360 02:50:13 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 Malloc5 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 
nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 Malloc6 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.360 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 Malloc7 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 Malloc8 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 Malloc9 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 Malloc10 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 Malloc11 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.623 02:50:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:12.002 02:50:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:12.002 02:50:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:12.002 02:50:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:12.002 02:50:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:12.002 02:50:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK1 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:13.909 02:50:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:14.848 02:50:17 
nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:14.848 02:50:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:14.848 02:50:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:14.848 02:50:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:14.848 02:50:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK2 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:16.754 02:50:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:17.690 02:50:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:17.690 02:50:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:17.690 02:50:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:17.690 02:50:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:17.690 02:50:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK3 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.617 02:50:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:20.997 02:50:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:20.997 02:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:20.997 02:50:23 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.997 02:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:20.997 02:50:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK4 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.904 02:50:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:23.841 02:50:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:23.841 02:50:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:23.842 02:50:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.842 02:50:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:23.842 02:50:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK5 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.748 02:50:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:24:26.685 02:50:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:26.685 02:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:26.685 02:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.685 02:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:26.685 02:50:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- 
# sleep 2 00:24:28.588 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:28.588 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:28.588 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK6 00:24:28.847 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:28.847 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.847 02:50:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:28.847 02:50:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.848 02:50:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:24:29.786 02:50:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:29.786 02:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:29.786 02:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.786 02:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:29.786 02:50:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK7 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.689 02:50:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:24:32.623 02:50:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:32.623 02:50:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:32.623 02:50:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.623 02:50:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:32.623 02:50:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:35.166 02:50:37 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK8 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.166 02:50:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:24:35.735 02:50:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:35.735 02:50:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:35.735 02:50:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.735 02:50:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:35.735 02:50:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK9 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.642 02:50:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:24:39.021 02:50:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:39.021 02:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:39.021 02:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.022 02:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:39.022 02:50:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK10 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- 
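Every pass through the for i in $(seq 1 $NVMF_SUBSYS) loop differs only in the cnode number and the serial it then waits for. A hedged sketch of that outer loop, reusing the transport, port, address and host identity values that appear verbatim in the log (treat them as values for this particular testbed, not defaults; TARGET_IP is a placeholder variable name):

# Connect one RDMA session per SPDK subsystem, then wait for its namespace.
NVMF_SUBSYS=11
TARGET_IP=192.168.100.8
HOST_UUID=00e1c02b-5999-e811-99d6-a4bf01488b4e   # host UUID taken from the trace

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect -i 15 \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" \
        --hostid="${HOST_UUID}" \
        -t rdma \
        -n "nqn.2016-06.io.spdk:cnode${i}" \
        -a "${TARGET_IP}" -s 4420
    wait_for_serial "SPDK${i}"   # polling helper sketched above
done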
common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.932 02:50:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:24:41.870 02:50:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:41.870 02:50:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:24:41.870 02:50:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.870 02:50:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:24:41.870 02:50:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK11 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:24:43.808 02:50:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:43.808 [global] 00:24:43.808 thread=1 00:24:43.808 invalidate=1 00:24:43.808 rw=read 00:24:43.808 time_based=1 00:24:43.808 runtime=10 00:24:43.808 ioengine=libaio 00:24:43.808 direct=1 00:24:43.808 bs=262144 00:24:43.808 iodepth=64 00:24:43.808 norandommap=1 00:24:43.808 numjobs=1 00:24:43.808 00:24:43.808 [job0] 00:24:43.808 filename=/dev/nvme0n1 00:24:43.808 [job1] 00:24:43.808 filename=/dev/nvme10n1 00:24:43.808 [job2] 00:24:43.808 filename=/dev/nvme1n1 00:24:43.808 [job3] 00:24:43.808 filename=/dev/nvme2n1 00:24:43.808 [job4] 00:24:43.808 filename=/dev/nvme3n1 00:24:43.808 [job5] 00:24:43.808 filename=/dev/nvme4n1 00:24:43.808 [job6] 00:24:43.808 filename=/dev/nvme5n1 00:24:43.808 [job7] 00:24:43.808 filename=/dev/nvme6n1 00:24:43.808 [job8] 00:24:43.808 filename=/dev/nvme7n1 00:24:43.808 [job9] 00:24:43.808 filename=/dev/nvme8n1 00:24:43.808 [job10] 00:24:43.808 filename=/dev/nvme9n1 00:24:44.067 Could not set queue depth (nvme0n1) 00:24:44.067 Could not set queue depth (nvme10n1) 00:24:44.067 Could not set queue depth (nvme1n1) 00:24:44.067 Could not set queue depth (nvme2n1) 00:24:44.067 Could not set queue depth (nvme3n1) 00:24:44.067 Could not set queue depth (nvme4n1) 00:24:44.067 Could not set queue depth (nvme5n1) 00:24:44.067 Could not set queue depth (nvme6n1) 00:24:44.067 Could not set queue depth (nvme7n1) 00:24:44.067 Could not set queue depth (nvme8n1) 00:24:44.067 Could not set queue depth (nvme9n1) 00:24:44.326 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.326 fio-3.35 00:24:44.326 Starting 11 threads 00:24:56.540 00:24:56.540 job0: (groupid=0, jobs=1): err= 0: pid=878511: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=882, BW=221MiB/s (231MB/s)(2223MiB/10074msec) 00:24:56.540 slat (usec): min=11, max=114321, avg=822.53, stdev=3688.50 00:24:56.540 clat (usec): min=521, max=295350, avg=71603.31, stdev=39911.46 00:24:56.540 lat (usec): min=535, max=295395, avg=72425.84, stdev=40354.47 00:24:56.540 clat percentiles (msec): 00:24:56.540 | 1.00th=[ 3], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 37], 00:24:56.540 | 30.00th=[ 44], 40.00th=[ 55], 50.00th=[ 68], 60.00th=[ 80], 00:24:56.540 | 70.00th=[ 86], 80.00th=[ 104], 90.00th=[ 125], 95.00th=[ 144], 00:24:56.540 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 197], 99.95th=[ 197], 00:24:56.540 | 99.99th=[ 296] 00:24:56.540 bw ( KiB/s): min=83968, max=422400, per=7.44%, avg=225996.80, stdev=98753.28, samples=20 00:24:56.540 iops : min= 328, max= 1650, avg=882.90, stdev=385.66, samples=20 00:24:56.540 lat (usec) : 750=0.12%, 1000=0.02% 00:24:56.540 lat (msec) : 2=0.49%, 4=1.12%, 10=2.45%, 20=1.17%, 50=32.03% 00:24:56.540 lat (msec) : 100=41.33%, 250=21.21%, 500=0.04% 00:24:56.540 cpu : usr=0.35%, sys=3.90%, ctx=2717, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=8892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job1: (groupid=0, jobs=1): err= 0: pid=878513: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=1204, BW=301MiB/s (316MB/s)(3033MiB/10073msec) 00:24:56.540 slat (usec): min=10, max=157304, avg=711.96, stdev=3414.76 00:24:56.540 clat (usec): min=977, max=314427, avg=52378.80, stdev=37622.02 00:24:56.540 lat (usec): min=1016, max=325668, avg=53090.76, stdev=38193.21 00:24:56.540 clat percentiles (msec): 00:24:56.540 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 22], 00:24:56.540 | 30.00th=[ 24], 40.00th=[ 34], 50.00th=[ 41], 60.00th=[ 50], 00:24:56.540 | 70.00th=[ 66], 80.00th=[ 81], 90.00th=[ 99], 
95.00th=[ 129], 00:24:56.540 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 197], 99.95th=[ 211], 00:24:56.540 | 99.99th=[ 313] 00:24:56.540 bw ( KiB/s): min=124152, max=698368, per=10.17%, avg=308902.00, stdev=177410.71, samples=20 00:24:56.540 iops : min= 484, max= 2728, avg=1206.60, stdev=693.06, samples=20 00:24:56.540 lat (usec) : 1000=0.01% 00:24:56.540 lat (msec) : 2=0.18%, 4=0.54%, 10=1.31%, 20=11.06%, 50=47.17% 00:24:56.540 lat (msec) : 100=30.05%, 250=9.64%, 500=0.04% 00:24:56.540 cpu : usr=0.45%, sys=4.79%, ctx=2976, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=12130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job2: (groupid=0, jobs=1): err= 0: pid=878517: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=1266, BW=317MiB/s (332MB/s)(3187MiB/10069msec) 00:24:56.540 slat (usec): min=11, max=123793, avg=661.43, stdev=3918.07 00:24:56.540 clat (usec): min=1001, max=298976, avg=49831.02, stdev=43740.35 00:24:56.540 lat (usec): min=1052, max=306869, avg=50492.46, stdev=44461.71 00:24:56.540 clat percentiles (msec): 00:24:56.540 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 17], 00:24:56.540 | 30.00th=[ 18], 40.00th=[ 27], 50.00th=[ 34], 60.00th=[ 37], 00:24:56.540 | 70.00th=[ 54], 80.00th=[ 94], 90.00th=[ 117], 95.00th=[ 136], 00:24:56.540 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 203], 00:24:56.540 | 99.99th=[ 300] 00:24:56.540 bw ( KiB/s): min=86528, max=846848, per=10.69%, avg=324773.70, stdev=232798.94, samples=20 00:24:56.540 iops : min= 338, max= 3308, avg=1268.60, stdev=909.41, samples=20 00:24:56.540 lat (msec) : 2=0.46%, 4=1.61%, 10=5.58%, 20=27.43%, 50=34.41% 00:24:56.540 lat (msec) : 100=12.35%, 250=18.12%, 500=0.04% 00:24:56.540 cpu : usr=0.53%, sys=4.84%, ctx=3639, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=12749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job3: (groupid=0, jobs=1): err= 0: pid=878518: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=1048, BW=262MiB/s (275MB/s)(2638MiB/10068msec) 00:24:56.540 slat (usec): min=16, max=152779, avg=820.12, stdev=4408.07 00:24:56.540 clat (usec): min=498, max=329003, avg=60184.52, stdev=42220.35 00:24:56.540 lat (usec): min=552, max=329049, avg=61004.64, stdev=42870.48 00:24:56.540 clat percentiles (usec): 00:24:56.540 | 1.00th=[ 1123], 5.00th=[ 3490], 10.00th=[ 12125], 20.00th=[ 20579], 00:24:56.540 | 30.00th=[ 33817], 40.00th=[ 41681], 50.00th=[ 49546], 60.00th=[ 63177], 00:24:56.540 | 70.00th=[ 82314], 80.00th=[ 98042], 90.00th=[121111], 95.00th=[131597], 00:24:56.540 | 99.00th=[181404], 99.50th=[185598], 99.90th=[196084], 99.95th=[254804], 00:24:56.540 | 99.99th=[329253] 00:24:56.540 bw ( KiB/s): min=121344, max=619008, per=8.84%, avg=268492.80, stdev=130006.45, samples=20 00:24:56.540 iops : min= 474, max= 2418, avg=1048.80, stdev=507.84, samples=20 00:24:56.540 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.43% 00:24:56.540 lat (msec) 
: 2=1.63%, 4=4.10%, 10=2.85%, 20=10.53%, 50=31.15% 00:24:56.540 lat (msec) : 100=30.69%, 250=18.52%, 500=0.06% 00:24:56.540 cpu : usr=0.36%, sys=4.44%, ctx=2843, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=10552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job4: (groupid=0, jobs=1): err= 0: pid=878519: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=1034, BW=259MiB/s (271MB/s)(2606MiB/10071msec) 00:24:56.540 slat (usec): min=16, max=109468, avg=778.69, stdev=4119.04 00:24:56.540 clat (usec): min=553, max=246021, avg=61005.54, stdev=40489.99 00:24:56.540 lat (usec): min=613, max=246068, avg=61784.23, stdev=41231.65 00:24:56.540 clat percentiles (msec): 00:24:56.540 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 24], 00:24:56.540 | 30.00th=[ 35], 40.00th=[ 42], 50.00th=[ 48], 60.00th=[ 64], 00:24:56.540 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 133], 00:24:56.540 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 203], 99.95th=[ 211], 00:24:56.540 | 99.99th=[ 230] 00:24:56.540 bw ( KiB/s): min=90112, max=646656, per=8.73%, avg=265239.25, stdev=149283.86, samples=20 00:24:56.540 iops : min= 352, max= 2526, avg=1036.05, stdev=583.08, samples=20 00:24:56.540 lat (usec) : 750=0.02%, 1000=0.01% 00:24:56.540 lat (msec) : 2=0.40%, 4=0.98%, 10=0.54%, 20=14.36%, 50=35.88% 00:24:56.540 lat (msec) : 100=29.63%, 250=18.18% 00:24:56.540 cpu : usr=0.40%, sys=4.44%, ctx=2953, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=10422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job5: (groupid=0, jobs=1): err= 0: pid=878520: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=1474, BW=369MiB/s (387MB/s)(3712MiB/10070msec) 00:24:56.540 slat (usec): min=16, max=73360, avg=555.76, stdev=2242.46 00:24:56.540 clat (usec): min=265, max=163167, avg=42802.32, stdev=29812.86 00:24:56.540 lat (usec): min=306, max=163229, avg=43358.08, stdev=30278.32 00:24:56.540 clat percentiles (msec): 00:24:56.540 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 21], 00:24:56.540 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 35], 60.00th=[ 41], 00:24:56.540 | 70.00th=[ 50], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 103], 00:24:56.540 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 161], 00:24:56.540 | 99.99th=[ 163] 00:24:56.540 bw ( KiB/s): min=153088, max=858624, per=12.46%, avg=378496.00, stdev=224925.36, samples=20 00:24:56.540 iops : min= 598, max= 3354, avg=1478.50, stdev=878.61, samples=20 00:24:56.540 lat (usec) : 500=0.04%, 750=0.05%, 1000=0.05% 00:24:56.540 lat (msec) : 2=0.51%, 4=1.23%, 10=3.91%, 20=13.41%, 50=50.98% 00:24:56.540 lat (msec) : 100=23.57%, 250=6.24% 00:24:56.540 cpu : usr=0.45%, sys=6.15%, ctx=4562, majf=0, minf=4097 00:24:56.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:56.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.540 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.540 issued rwts: total=14848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.540 job6: (groupid=0, jobs=1): err= 0: pid=878521: Wed May 15 02:50:57 2024 00:24:56.540 read: IOPS=984, BW=246MiB/s (258MB/s)(2466MiB/10019msec) 00:24:56.541 slat (usec): min=16, max=93054, avg=944.58, stdev=3846.92 00:24:56.541 clat (msec): min=9, max=222, avg=64.01, stdev=42.84 00:24:56.541 lat (msec): min=10, max=279, avg=64.95, stdev=43.59 00:24:56.541 clat percentiles (msec): 00:24:56.541 | 1.00th=[ 18], 5.00th=[ 20], 10.00th=[ 22], 20.00th=[ 23], 00:24:56.541 | 30.00th=[ 30], 40.00th=[ 41], 50.00th=[ 54], 60.00th=[ 66], 00:24:56.541 | 70.00th=[ 83], 80.00th=[ 106], 90.00th=[ 124], 95.00th=[ 140], 00:24:56.541 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 199], 99.95th=[ 218], 00:24:56.541 | 99.99th=[ 224] 00:24:56.541 bw ( KiB/s): min=80384, max=720896, per=8.26%, avg=250880.00, stdev=173005.37, samples=20 00:24:56.541 iops : min= 314, max= 2816, avg=980.00, stdev=675.80, samples=20 00:24:56.541 lat (msec) : 10=0.01%, 20=6.13%, 50=42.64%, 100=29.23%, 250=21.98% 00:24:56.541 cpu : usr=0.36%, sys=4.27%, ctx=1959, majf=0, minf=3533 00:24:56.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:56.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.541 issued rwts: total=9863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.541 job7: (groupid=0, jobs=1): err= 0: pid=878522: Wed May 15 02:50:57 2024 00:24:56.541 read: IOPS=912, BW=228MiB/s (239MB/s)(2297MiB/10069msec) 00:24:56.541 slat (usec): min=16, max=67635, avg=715.02, stdev=3286.79 00:24:56.541 clat (usec): min=325, max=227960, avg=69338.22, stdev=37324.70 00:24:56.541 lat (usec): min=368, max=233734, avg=70053.24, stdev=37873.85 00:24:56.541 clat percentiles (msec): 00:24:56.541 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 36], 00:24:56.541 | 30.00th=[ 47], 40.00th=[ 57], 50.00th=[ 69], 60.00th=[ 79], 00:24:56.541 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 134], 00:24:56.541 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 213], 00:24:56.541 | 99.99th=[ 228] 00:24:56.541 bw ( KiB/s): min=82944, max=421376, per=7.69%, avg=233639.15, stdev=82264.37, samples=20 00:24:56.541 iops : min= 324, max= 1646, avg=912.65, stdev=321.35, samples=20 00:24:56.541 lat (usec) : 500=0.03%, 750=0.02%, 1000=0.01% 00:24:56.541 lat (msec) : 2=0.45%, 4=0.62%, 10=2.21%, 20=3.06%, 50=28.47% 00:24:56.541 lat (msec) : 100=46.58%, 250=18.55% 00:24:56.541 cpu : usr=0.41%, sys=4.26%, ctx=3579, majf=0, minf=4097 00:24:56.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:56.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.541 issued rwts: total=9189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.541 job8: (groupid=0, jobs=1): err= 0: pid=878523: Wed May 15 02:50:57 2024 00:24:56.541 read: IOPS=1039, BW=260MiB/s (272MB/s)(2616MiB/10070msec) 00:24:56.541 slat (usec): min=16, max=126630, avg=778.04, stdev=3952.54 00:24:56.541 clat (usec): min=1068, max=279665, avg=60755.40, stdev=38992.76 
00:24:56.541 lat (usec): min=1152, max=300341, avg=61533.44, stdev=39688.29 00:24:56.541 clat percentiles (msec): 00:24:56.541 | 1.00th=[ 12], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 23], 00:24:56.541 | 30.00th=[ 32], 40.00th=[ 41], 50.00th=[ 50], 60.00th=[ 70], 00:24:56.541 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 127], 00:24:56.541 | 99.00th=[ 186], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 197], 00:24:56.541 | 99.99th=[ 279] 00:24:56.541 bw ( KiB/s): min=95744, max=795136, per=8.77%, avg=266240.00, stdev=159741.15, samples=20 00:24:56.541 iops : min= 374, max= 3106, avg=1040.00, stdev=623.99, samples=20 00:24:56.541 lat (msec) : 2=0.03%, 4=0.23%, 10=0.54%, 20=10.99%, 50=39.17% 00:24:56.541 lat (msec) : 100=32.69%, 250=16.33%, 500=0.02% 00:24:56.541 cpu : usr=0.46%, sys=4.38%, ctx=3055, majf=0, minf=4097 00:24:56.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:56.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.541 issued rwts: total=10463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.541 job9: (groupid=0, jobs=1): err= 0: pid=878524: Wed May 15 02:50:57 2024 00:24:56.541 read: IOPS=907, BW=227MiB/s (238MB/s)(2286MiB/10073msec) 00:24:56.541 slat (usec): min=16, max=68160, avg=899.47, stdev=4186.78 00:24:56.541 clat (usec): min=555, max=246793, avg=69528.12, stdev=36582.39 00:24:56.541 lat (usec): min=602, max=246862, avg=70427.60, stdev=37276.43 00:24:56.541 clat percentiles (msec): 00:24:56.541 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 40], 00:24:56.541 | 30.00th=[ 45], 40.00th=[ 51], 50.00th=[ 62], 60.00th=[ 75], 00:24:56.541 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 140], 00:24:56.541 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 207], 00:24:56.541 | 99.99th=[ 247] 00:24:56.541 bw ( KiB/s): min=91648, max=403968, per=7.66%, avg=232473.60, stdev=88778.57, samples=20 00:24:56.541 iops : min= 358, max= 1578, avg=908.10, stdev=346.79, samples=20 00:24:56.541 lat (usec) : 750=0.17%, 1000=0.02% 00:24:56.541 lat (msec) : 2=0.11%, 4=0.24%, 10=0.86%, 20=1.45%, 50=36.10% 00:24:56.541 lat (msec) : 100=42.62%, 250=18.42% 00:24:56.541 cpu : usr=0.37%, sys=4.09%, ctx=2630, majf=0, minf=4097 00:24:56.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:56.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.541 issued rwts: total=9144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.541 job10: (groupid=0, jobs=1): err= 0: pid=878525: Wed May 15 02:50:57 2024 00:24:56.541 read: IOPS=1117, BW=279MiB/s (293MB/s)(2813MiB/10071msec) 00:24:56.541 slat (usec): min=13, max=123785, avg=738.16, stdev=2734.24 00:24:56.541 clat (usec): min=1055, max=194483, avg=56491.88, stdev=31025.42 00:24:56.541 lat (usec): min=1101, max=202869, avg=57230.04, stdev=31454.36 00:24:56.541 clat percentiles (msec): 00:24:56.541 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 29], 00:24:56.541 | 30.00th=[ 37], 40.00th=[ 43], 50.00th=[ 54], 60.00th=[ 65], 00:24:56.541 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 120], 00:24:56.541 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 194], 00:24:56.541 | 
99.99th=[ 194] 00:24:56.541 bw ( KiB/s): min=132608, max=557056, per=9.43%, avg=286412.80, stdev=120827.02, samples=20 00:24:56.541 iops : min= 518, max= 2176, avg=1118.80, stdev=471.98, samples=20 00:24:56.541 lat (msec) : 2=0.28%, 4=0.87%, 10=2.23%, 20=4.21%, 50=40.69% 00:24:56.541 lat (msec) : 100=44.51%, 250=7.20% 00:24:56.541 cpu : usr=0.46%, sys=4.89%, ctx=3108, majf=0, minf=4097 00:24:56.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:56.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:56.541 issued rwts: total=11251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:56.541 00:24:56.541 Run status group 0 (all jobs): 00:24:56.541 READ: bw=2966MiB/s (3110MB/s), 221MiB/s-369MiB/s (231MB/s-387MB/s), io=29.2GiB (31.3GB), run=10019-10074msec 00:24:56.541 00:24:56.541 Disk stats (read/write): 00:24:56.541 nvme0n1: ios=17446/0, merge=0/0, ticks=1227358/0, in_queue=1227358, util=96.46% 00:24:56.541 nvme10n1: ios=23884/0, merge=0/0, ticks=1218262/0, in_queue=1218262, util=96.72% 00:24:56.541 nvme1n1: ios=25176/0, merge=0/0, ticks=1221338/0, in_queue=1221338, util=97.11% 00:24:56.541 nvme2n1: ios=20650/0, merge=0/0, ticks=1219552/0, in_queue=1219552, util=97.31% 00:24:56.541 nvme3n1: ios=20521/0, merge=0/0, ticks=1223474/0, in_queue=1223474, util=97.42% 00:24:56.541 nvme4n1: ios=29358/0, merge=0/0, ticks=1217389/0, in_queue=1217389, util=97.89% 00:24:56.541 nvme5n1: ios=18704/0, merge=0/0, ticks=1222796/0, in_queue=1222796, util=98.13% 00:24:56.541 nvme6n1: ios=17820/0, merge=0/0, ticks=1224806/0, in_queue=1224806, util=98.28% 00:24:56.541 nvme7n1: ios=20652/0, merge=0/0, ticks=1225530/0, in_queue=1225530, util=98.84% 00:24:56.541 nvme8n1: ios=17981/0, merge=0/0, ticks=1221228/0, in_queue=1221228, util=99.10% 00:24:56.541 nvme9n1: ios=22141/0, merge=0/0, ticks=1220989/0, in_queue=1220989, util=99.26% 00:24:56.541 02:50:57 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:56.541 [global] 00:24:56.541 thread=1 00:24:56.541 invalidate=1 00:24:56.541 rw=randwrite 00:24:56.541 time_based=1 00:24:56.541 runtime=10 00:24:56.541 ioengine=libaio 00:24:56.541 direct=1 00:24:56.541 bs=262144 00:24:56.541 iodepth=64 00:24:56.541 norandommap=1 00:24:56.541 numjobs=1 00:24:56.541 00:24:56.541 [job0] 00:24:56.541 filename=/dev/nvme0n1 00:24:56.541 [job1] 00:24:56.541 filename=/dev/nvme10n1 00:24:56.541 [job2] 00:24:56.541 filename=/dev/nvme1n1 00:24:56.541 [job3] 00:24:56.541 filename=/dev/nvme2n1 00:24:56.541 [job4] 00:24:56.541 filename=/dev/nvme3n1 00:24:56.541 [job5] 00:24:56.541 filename=/dev/nvme4n1 00:24:56.541 [job6] 00:24:56.541 filename=/dev/nvme5n1 00:24:56.541 [job7] 00:24:56.541 filename=/dev/nvme6n1 00:24:56.541 [job8] 00:24:56.541 filename=/dev/nvme7n1 00:24:56.541 [job9] 00:24:56.541 filename=/dev/nvme8n1 00:24:56.541 [job10] 00:24:56.541 filename=/dev/nvme9n1 00:24:56.541 Could not set queue depth (nvme0n1) 00:24:56.541 Could not set queue depth (nvme10n1) 00:24:56.541 Could not set queue depth (nvme1n1) 00:24:56.541 Could not set queue depth (nvme2n1) 00:24:56.541 Could not set queue depth (nvme3n1) 00:24:56.541 Could not set queue depth (nvme4n1) 00:24:56.541 Could not set queue depth (nvme5n1) 00:24:56.541 Could not set queue depth (nvme6n1) 
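Both fio passes are driven through scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t <read|randwrite> -r 10, and the [global]/[jobN] listing echoed above shows the job file it runs: one job per connected namespace, 256 KiB blocks, queue depth 64, 10-second time-based runs via libaio. A rough sketch of how an equivalent job file could be generated for the eleven devices, assuming the filenames match the ones printed in the log (the wrapper's real internals are not reproduced here):

# Build an fio job file equivalent to the one echoed by the wrapper:
# global settings once, then one [jobN] section per NVMe namespace.
rw=read            # the second pass shown below uses rw=randwrite
jobfile=$(mktemp)
cat > "$jobfile" <<EOF
[global]
thread=1
invalidate=1
rw=${rw}
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF

n=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
           /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 \
           /dev/nvme9n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> "$jobfile"
    n=$((n + 1))
done

fio "$jobfile"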
00:24:56.541 Could not set queue depth (nvme7n1) 00:24:56.541 Could not set queue depth (nvme8n1) 00:24:56.541 Could not set queue depth (nvme9n1) 00:24:56.541 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.541 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.541 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:56.542 fio-3.35 00:24:56.542 Starting 11 threads 00:25:06.524 00:25:06.524 job0: (groupid=0, jobs=1): err= 0: pid=879798: Wed May 15 02:51:09 2024 00:25:06.524 write: IOPS=707, BW=177MiB/s (186MB/s)(1777MiB/10041msec); 0 zone resets 00:25:06.524 slat (usec): min=26, max=76647, avg=1164.94, stdev=3094.44 00:25:06.524 clat (usec): min=226, max=207144, avg=89232.56, stdev=34014.31 00:25:06.524 lat (usec): min=271, max=207207, avg=90397.50, stdev=34480.44 00:25:06.524 clat percentiles (usec): 00:25:06.524 | 1.00th=[ 1057], 5.00th=[ 5735], 10.00th=[ 33424], 20.00th=[ 69731], 00:25:06.524 | 30.00th=[ 85459], 40.00th=[ 90702], 50.00th=[ 93848], 60.00th=[101188], 00:25:06.524 | 70.00th=[107480], 80.00th=[113771], 90.00th=[125305], 95.00th=[132645], 00:25:06.524 | 99.00th=[147850], 99.50th=[149947], 99.90th=[196084], 99.95th=[200279], 00:25:06.524 | 99.99th=[206570] 00:25:06.524 bw ( KiB/s): min=128512, max=334848, per=7.57%, avg=180300.80, stdev=46048.33, samples=20 00:25:06.524 iops : min= 502, max= 1308, avg=704.30, stdev=179.88, samples=20 00:25:06.524 lat (usec) : 250=0.06%, 500=0.24%, 750=0.11%, 1000=0.44% 00:25:06.524 lat (msec) : 2=1.75%, 4=1.65%, 10=2.96%, 20=1.51%, 50=2.58% 00:25:06.524 lat (msec) : 100=47.45%, 250=41.27% 00:25:06.524 cpu : usr=1.81%, sys=2.92%, ctx=2300, majf=0, minf=1 00:25:06.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:06.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.524 issued rwts: total=0,7106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.524 job1: (groupid=0, jobs=1): err= 0: pid=879810: Wed May 15 02:51:09 2024 00:25:06.524 write: IOPS=666, BW=167MiB/s (175MB/s)(1677MiB/10073msec); 0 zone resets 00:25:06.524 slat (usec): min=25, max=55318, avg=1212.88, stdev=3056.47 00:25:06.524 clat (msec): min=10, 
max=186, avg=94.84, stdev=28.07 00:25:06.524 lat (msec): min=10, max=186, avg=96.05, stdev=28.49 00:25:06.524 clat percentiles (msec): 00:25:06.524 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 56], 20.00th=[ 77], 00:25:06.524 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 97], 60.00th=[ 105], 00:25:06.524 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 128], 95.00th=[ 136], 00:25:06.524 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 176], 00:25:06.524 | 99.99th=[ 188] 00:25:06.524 bw ( KiB/s): min=129536, max=223232, per=7.14%, avg=170137.60, stdev=30813.77, samples=20 00:25:06.524 iops : min= 506, max= 872, avg=664.60, stdev=120.37, samples=20 00:25:06.524 lat (msec) : 20=1.27%, 50=7.53%, 100=44.64%, 250=46.56% 00:25:06.524 cpu : usr=1.80%, sys=2.71%, ctx=2104, majf=0, minf=1 00:25:06.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:06.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.524 issued rwts: total=0,6709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.524 job2: (groupid=0, jobs=1): err= 0: pid=879811: Wed May 15 02:51:09 2024 00:25:06.524 write: IOPS=760, BW=190MiB/s (199MB/s)(1909MiB/10043msec); 0 zone resets 00:25:06.524 slat (usec): min=27, max=65140, avg=1235.71, stdev=2871.57 00:25:06.524 clat (msec): min=7, max=187, avg=82.92, stdev=31.36 00:25:06.524 lat (msec): min=7, max=187, avg=84.15, stdev=31.87 00:25:06.524 clat percentiles (msec): 00:25:06.524 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 48], 00:25:06.524 | 30.00th=[ 70], 40.00th=[ 87], 50.00th=[ 92], 60.00th=[ 95], 00:25:06.524 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 127], 00:25:06.524 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 188], 00:25:06.524 | 99.99th=[ 188] 00:25:06.524 bw ( KiB/s): min=129024, max=482304, per=8.14%, avg=193843.20, stdev=74929.60, samples=20 00:25:06.524 iops : min= 504, max= 1884, avg=757.20, stdev=292.69, samples=20 00:25:06.524 lat (msec) : 10=0.05%, 20=0.24%, 50=21.14%, 100=46.43%, 250=32.14% 00:25:06.524 cpu : usr=1.81%, sys=3.02%, ctx=2038, majf=0, minf=1 00:25:06.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.524 issued rwts: total=0,7635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.524 job3: (groupid=0, jobs=1): err= 0: pid=879812: Wed May 15 02:51:09 2024 00:25:06.524 write: IOPS=637, BW=159MiB/s (167MB/s)(1601MiB/10043msec); 0 zone resets 00:25:06.524 slat (usec): min=28, max=46301, avg=1261.58, stdev=2962.91 00:25:06.524 clat (msec): min=9, max=162, avg=99.09, stdev=29.32 00:25:06.524 lat (msec): min=10, max=162, avg=100.35, stdev=29.82 00:25:06.524 clat percentiles (msec): 00:25:06.524 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 54], 20.00th=[ 74], 00:25:06.524 | 30.00th=[ 90], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 113], 00:25:06.524 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 136], 00:25:06.524 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 161], 00:25:06.524 | 99.99th=[ 163] 00:25:06.524 bw ( KiB/s): min=123904, max=326144, per=6.81%, avg=162304.00, stdev=49017.08, samples=20 00:25:06.524 iops : min= 484, max= 
1274, avg=634.00, stdev=191.47, samples=20 00:25:06.524 lat (msec) : 10=0.02%, 20=0.50%, 50=8.15%, 100=33.25%, 250=58.08% 00:25:06.524 cpu : usr=1.89%, sys=2.82%, ctx=2129, majf=0, minf=1 00:25:06.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:06.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.524 issued rwts: total=0,6403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.524 job4: (groupid=0, jobs=1): err= 0: pid=879813: Wed May 15 02:51:09 2024 00:25:06.524 write: IOPS=1336, BW=334MiB/s (350MB/s)(3348MiB/10020msec); 0 zone resets 00:25:06.524 slat (usec): min=25, max=37485, avg=706.14, stdev=1821.78 00:25:06.524 clat (usec): min=687, max=138567, avg=47164.23, stdev=33279.01 00:25:06.525 lat (usec): min=749, max=139160, avg=47870.38, stdev=33785.23 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 23], 00:25:06.525 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 38], 00:25:06.525 | 70.00th=[ 61], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 109], 00:25:06.525 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 132], 00:25:06.525 | 99.99th=[ 136] 00:25:06.525 bw ( KiB/s): min=134656, max=715264, per=14.32%, avg=341196.80, stdev=216434.90, samples=20 00:25:06.525 iops : min= 526, max= 2794, avg=1332.80, stdev=845.45, samples=20 00:25:06.525 lat (usec) : 750=0.04%, 1000=0.04% 00:25:06.525 lat (msec) : 2=0.51%, 4=0.28%, 10=0.60%, 20=1.15%, 50=64.98% 00:25:06.525 lat (msec) : 100=20.67%, 250=11.73% 00:25:06.525 cpu : usr=4.08%, sys=4.12%, ctx=3067, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,13391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job5: (groupid=0, jobs=1): err= 0: pid=879814: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=622, BW=156MiB/s (163MB/s)(1569MiB/10083msec); 0 zone resets 00:25:06.525 slat (usec): min=25, max=24090, avg=1501.55, stdev=3073.76 00:25:06.525 clat (usec): min=772, max=179763, avg=101302.32, stdev=33351.58 00:25:06.525 lat (usec): min=841, max=179820, avg=102803.86, stdev=33855.26 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 50], 20.00th=[ 72], 00:25:06.525 | 30.00th=[ 94], 40.00th=[ 106], 50.00th=[ 112], 60.00th=[ 117], 00:25:06.525 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 134], 95.00th=[ 138], 00:25:06.525 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 176], 00:25:06.525 | 99.99th=[ 180] 00:25:06.525 bw ( KiB/s): min=124928, max=357376, per=6.67%, avg=159027.20, stdev=56849.94, samples=20 00:25:06.525 iops : min= 488, max= 1396, avg=621.20, stdev=222.07, samples=20 00:25:06.525 lat (usec) : 1000=0.08% 00:25:06.525 lat (msec) : 2=0.49%, 4=0.02%, 10=1.07%, 20=2.39%, 50=5.99% 00:25:06.525 lat (msec) : 100=24.54%, 250=65.42% 00:25:06.525 cpu : usr=1.73%, sys=2.41%, ctx=1712, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,6275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job6: (groupid=0, jobs=1): err= 0: pid=879815: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=761, BW=190MiB/s (200MB/s)(1920MiB/10085msec); 0 zone resets 00:25:06.525 slat (usec): min=24, max=65521, avg=1055.21, stdev=2849.91 00:25:06.525 clat (usec): min=905, max=188438, avg=82961.45, stdev=40994.17 00:25:06.525 lat (usec): min=972, max=188520, avg=84016.66, stdev=41624.54 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 24], 20.00th=[ 41], 00:25:06.525 | 30.00th=[ 57], 40.00th=[ 78], 50.00th=[ 93], 60.00th=[ 104], 00:25:06.525 | 70.00th=[ 114], 80.00th=[ 122], 90.00th=[ 130], 95.00th=[ 136], 00:25:06.525 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 182], 00:25:06.525 | 99.99th=[ 188] 00:25:06.525 bw ( KiB/s): min=124928, max=313344, per=8.18%, avg=194969.60, stdev=59364.26, samples=20 00:25:06.525 iops : min= 488, max= 1224, avg=761.60, stdev=231.89, samples=20 00:25:06.525 lat (usec) : 1000=0.03% 00:25:06.525 lat (msec) : 2=0.76%, 4=2.72%, 10=2.51%, 20=2.66%, 50=19.18% 00:25:06.525 lat (msec) : 100=29.08%, 250=43.07% 00:25:06.525 cpu : usr=2.12%, sys=2.97%, ctx=2570, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job7: (groupid=0, jobs=1): err= 0: pid=879816: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=774, BW=194MiB/s (203MB/s)(1946MiB/10043msec); 0 zone resets 00:25:06.525 slat (usec): min=26, max=103021, avg=1018.39, stdev=3118.93 00:25:06.525 clat (usec): min=834, max=243619, avg=81535.16, stdev=42316.65 00:25:06.525 lat (usec): min=911, max=243702, avg=82553.55, stdev=42929.16 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 34], 00:25:06.525 | 30.00th=[ 63], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 102], 00:25:06.525 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 136], 00:25:06.525 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 186], 99.95th=[ 207], 00:25:06.525 | 99.99th=[ 245] 00:25:06.525 bw ( KiB/s): min=131072, max=344576, per=8.30%, avg=197632.00, stdev=64910.06, samples=20 00:25:06.525 iops : min= 512, max= 1346, avg=772.00, stdev=253.55, samples=20 00:25:06.525 lat (usec) : 1000=0.09% 00:25:06.525 lat (msec) : 2=0.89%, 4=1.61%, 10=6.03%, 20=6.44%, 50=10.97% 00:25:06.525 lat (msec) : 100=33.37%, 250=40.61% 00:25:06.525 cpu : usr=1.89%, sys=3.28%, ctx=2663, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,7783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job8: (groupid=0, jobs=1): err= 0: pid=879817: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=1480, BW=370MiB/s (388MB/s)(3717MiB/10041msec); 0 zone resets 00:25:06.525 slat 
(usec): min=25, max=57248, avg=626.99, stdev=1809.42 00:25:06.525 clat (usec): min=849, max=159988, avg=42574.21, stdev=31493.83 00:25:06.525 lat (usec): min=1058, max=160069, avg=43201.20, stdev=31963.42 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 22], 00:25:06.525 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 28], 60.00th=[ 35], 00:25:06.525 | 70.00th=[ 41], 80.00th=[ 71], 90.00th=[ 94], 95.00th=[ 112], 00:25:06.525 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 150], 00:25:06.525 | 99.99th=[ 155] 00:25:06.525 bw ( KiB/s): min=128512, max=712704, per=15.91%, avg=379033.60, stdev=206741.66, samples=20 00:25:06.525 iops : min= 502, max= 2784, avg=1480.60, stdev=807.58, samples=20 00:25:06.525 lat (usec) : 1000=0.01% 00:25:06.525 lat (msec) : 2=0.05%, 4=0.26%, 10=0.45%, 20=12.74%, 50=61.69% 00:25:06.525 lat (msec) : 100=16.90%, 250=7.91% 00:25:06.525 cpu : usr=4.05%, sys=4.63%, ctx=3311, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,14869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job9: (groupid=0, jobs=1): err= 0: pid=879818: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=743, BW=186MiB/s (195MB/s)(1875MiB/10082msec); 0 zone resets 00:25:06.525 slat (usec): min=25, max=72142, avg=1133.87, stdev=3077.67 00:25:06.525 clat (usec): min=286, max=188125, avg=84885.96, stdev=39380.17 00:25:06.525 lat (usec): min=317, max=188186, avg=86019.84, stdev=39961.68 00:25:06.525 clat percentiles (msec): 00:25:06.525 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 37], 00:25:06.525 | 30.00th=[ 61], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 105], 00:25:06.525 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 136], 00:25:06.525 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 182], 00:25:06.525 | 99.99th=[ 188] 00:25:06.525 bw ( KiB/s): min=128512, max=414720, per=7.99%, avg=190336.00, stdev=88917.15, samples=20 00:25:06.525 iops : min= 502, max= 1620, avg=743.50, stdev=347.33, samples=20 00:25:06.525 lat (usec) : 500=0.04%, 750=0.08%, 1000=0.03% 00:25:06.525 lat (msec) : 2=0.09%, 4=0.28%, 10=1.37%, 20=3.65%, 50=21.47% 00:25:06.525 lat (msec) : 100=27.41%, 250=45.57% 00:25:06.525 cpu : usr=2.03%, sys=2.77%, ctx=2302, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,7498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 job10: (groupid=0, jobs=1): err= 0: pid=879819: Wed May 15 02:51:09 2024 00:25:06.525 write: IOPS=843, BW=211MiB/s (221MB/s)(2127MiB/10082msec); 0 zone resets 00:25:06.525 slat (usec): min=25, max=102972, avg=1000.54, stdev=3107.02 00:25:06.525 clat (usec): min=538, max=208835, avg=74802.60, stdev=47385.12 00:25:06.525 lat (usec): min=602, max=232061, avg=75803.14, stdev=48040.30 00:25:06.525 clat percentiles (usec): 00:25:06.525 | 1.00th=[ 963], 5.00th=[ 3130], 10.00th=[ 5997], 20.00th=[ 15139], 00:25:06.525 | 30.00th=[ 35914], 40.00th=[ 
71828], 50.00th=[ 88605], 60.00th=[ 98042], 00:25:06.525 | 70.00th=[108528], 80.00th=[117965], 90.00th=[129500], 95.00th=[137364], 00:25:06.525 | 99.00th=[164627], 99.50th=[179307], 99.90th=[200279], 99.95th=[206570], 00:25:06.525 | 99.99th=[208667] 00:25:06.525 bw ( KiB/s): min=129024, max=393728, per=9.07%, avg=216192.00, stdev=69193.69, samples=20 00:25:06.525 iops : min= 504, max= 1538, avg=844.50, stdev=270.29, samples=20 00:25:06.525 lat (usec) : 750=0.32%, 1000=0.81% 00:25:06.525 lat (msec) : 2=1.68%, 4=3.54%, 10=9.94%, 20=5.30%, 50=14.02% 00:25:06.525 lat (msec) : 100=26.19%, 250=38.20% 00:25:06.525 cpu : usr=1.85%, sys=3.46%, ctx=2874, majf=0, minf=1 00:25:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.525 issued rwts: total=0,8508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.525 00:25:06.525 Run status group 0 (all jobs): 00:25:06.525 WRITE: bw=2327MiB/s (2440MB/s), 156MiB/s-370MiB/s (163MB/s-388MB/s), io=22.9GiB (24.6GB), run=10020-10085msec 00:25:06.525 00:25:06.525 Disk stats (read/write): 00:25:06.525 nvme0n1: ios=49/14081, merge=0/0, ticks=12/1234384, in_queue=1234396, util=95.59% 00:25:06.525 nvme10n1: ios=0/13286, merge=0/0, ticks=0/1234438, in_queue=1234438, util=95.86% 00:25:06.525 nvme1n1: ios=0/15130, merge=0/0, ticks=0/1234038, in_queue=1234038, util=96.36% 00:25:06.525 nvme2n1: ios=0/12662, merge=0/0, ticks=0/1237087, in_queue=1237087, util=96.65% 00:25:06.525 nvme3n1: ios=0/26626, merge=0/0, ticks=0/1237303, in_queue=1237303, util=96.78% 00:25:06.525 nvme4n1: ios=0/12418, merge=0/0, ticks=0/1228553, in_queue=1228553, util=97.39% 00:25:06.525 nvme5n1: ios=0/15232, merge=0/0, ticks=0/1236801, in_queue=1236801, util=97.66% 00:25:06.525 nvme6n1: ios=0/15426, merge=0/0, ticks=0/1235861, in_queue=1235861, util=97.87% 00:25:06.525 nvme7n1: ios=0/29609, merge=0/0, ticks=0/1233908, in_queue=1233908, util=98.62% 00:25:06.525 nvme8n1: ios=0/14860, merge=0/0, ticks=0/1231900, in_queue=1231900, util=98.89% 00:25:06.525 nvme9n1: ios=0/16881, merge=0/0, ticks=0/1233336, in_queue=1233336, util=99.07% 00:25:06.525 02:51:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:06.525 02:51:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:06.525 02:51:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.525 02:51:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:07.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK1 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:07.092 02:51:10 
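Teardown mirrors the setup loop: for each subsystem the host runs nvme disconnect against the cnode NQN, waitforserial_disconnect confirms the SPDKn serial is gone from lsblk, and the target-side subsystem is removed with rpc_cmd nvmf_delete_subsystem. A condensed sketch of that loop, assuming rpc.py is invoked the way rpc_cmd wraps it in the harness (the wrapper itself is not reproduced here, and SPDK_DIR just echoes the workspace path from the trace):

# Disconnect each host-side session and delete the matching target subsystem.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk

for i in $(seq 1 "$NVMF_SUBSYS"); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    nvme disconnect -n "$nqn"
    # Wait until no block device reports the SPDK${i} serial any more.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 2
    done
    # rpc_cmd in the harness forwards to the target's JSON-RPC server.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem "$nqn"
done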
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.092 02:51:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:08.028 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:08.028 02:51:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:08.028 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:08.028 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK2 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.029 02:51:11 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:08.966 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK3 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:08.966 
02:51:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.966 02:51:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:09.904 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:09.904 02:51:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:09.904 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:09.904 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:09.904 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK4 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.163 02:51:13 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:11.101 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK5 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.101 02:51:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:12.039 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:12.039 02:51:15 
nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK6 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.039 02:51:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:12.978 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK7 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.978 02:51:16 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:13.914 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK8 00:25:13.914 02:51:17 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:13.914 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.174 02:51:17 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:14.174 02:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.174 02:51:17 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:15.111 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK9 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.111 02:51:18 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:16.049 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK10 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode10 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.049 02:51:19 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:16.988 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK11 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:16.988 rmmod nvme_rdma 00:25:16.988 rmmod nvme_fabrics 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 873505 ']' 00:25:16.988 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 873505 00:25:16.989 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@947 -- # '[' -z 873505 
']' 00:25:16.989 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@951 -- # kill -0 873505 00:25:16.989 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # uname 00:25:16.989 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:16.989 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 873505 00:25:17.247 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:17.247 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:17.247 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@965 -- # echo 'killing process with pid 873505' 00:25:17.247 killing process with pid 873505 00:25:17.247 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@966 -- # kill 873505 00:25:17.247 [2024-05-15 02:51:20.282685] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:17.247 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@971 -- # wait 873505 00:25:17.247 [2024-05-15 02:51:20.390459] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:25:17.869 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.869 02:51:20 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:17.869 00:25:17.869 real 1m15.167s 00:25:17.869 user 4m40.717s 00:25:17.869 sys 0m18.668s 00:25:17.869 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:17.869 02:51:20 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.869 ************************************ 00:25:17.869 END TEST nvmf_multiconnection 00:25:17.869 ************************************ 00:25:17.869 02:51:20 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:17.869 02:51:20 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:17.869 02:51:20 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:17.869 02:51:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:17.869 ************************************ 00:25:17.869 START TEST nvmf_initiator_timeout 00:25:17.869 ************************************ 00:25:17.869 02:51:20 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:17.869 * Looking for test storage... 
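Before the initiator-timeout run that begins here, the multiconnection teardown traced above repeats the same three-step pattern for each subsystem and then shuts the target down. A condensed sketch of that sequence, reconstructed from the commands visible in the trace (it assumes the harness helpers from nvmf/common.sh and multiconnection.sh, such as rpc_cmd, waitforserial_disconnect and nvmftestfini, are in scope; it is an illustration of the flow, not the script verbatim):

    for i in $(seq 1 $NVMF_SUBSYS); do
        # Drop the initiator-side session for subsystem i.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Block until no namespace with serial SPDK<i> is visible in 'lsblk -l -o NAME,SERIAL'.
        waitforserial_disconnect "SPDK${i}"
        # Delete the subsystem on the SPDK target over the RPC socket.
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
    rm -f ./local-job0-0-verify.state   # fio verify state left over from the I/O phase
    nvmftestfini                        # unload nvme-rdma/nvme-fabrics and stop nvmf_tgt

The 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' lines and the 'killing process with pid 873505' message above are the visible output of that final nvmftestfini step.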
00:25:17.869 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:17.869 02:51:21 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.869 02:51:21 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:24.442 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:24.442 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.442 02:51:27 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:24.442 Found net devices under 0000:18:00.0: mlx_0_0 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:24.442 Found net devices under 0000:18:00.1: mlx_0_1 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:24.442 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.443 02:51:27 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:24.443 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:24.443 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:25:24.443 altname enp24s0f0np0 00:25:24.443 altname ens785f0np0 00:25:24.443 inet 192.168.100.8/24 scope global mlx_0_0 00:25:24.443 valid_lft forever preferred_lft forever 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:25:24.443 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:24.443 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:25:24.443 altname enp24s0f1np1 00:25:24.443 altname ens785f1np1 00:25:24.443 inet 192.168.100.9/24 scope global mlx_0_1 00:25:24.443 valid_lft forever preferred_lft forever 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:24.443 192.168.100.9' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:24.443 192.168.100.9' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:24.443 192.168.100.9' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=885857 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 885857 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@828 -- # '[' -z 885857 ']' 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
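The nvmftestinit phase traced above (nvmf/common.sh) probes the Mellanox PCI devices, loads the RDMA kernel modules, and records the IPv4 address of each RDMA-capable netdev before launching nvmf_tgt and waiting on its RPC socket. Stripped of the xtrace noise, the discovery amounts to roughly the following (helper names such as get_rdma_if_list come from nvmf/common.sh as shown in the trace; the loop form here is a simplification):

    # Load the kernel modules NVMe-oF over RDMA needs on this host.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$m"
    done
    # For every RDMA-backed net device (mlx_0_0, mlx_0_1, ...), grab its first IPv4 address.
    for nic in $(get_rdma_if_list); do
        ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
    done
    # In this run the first two addresses become
    #   NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9,
    # and the transport options are set to '-t rdma --num-shared-buffers 1024'.

nvmf_tgt is then started with '-i 0 -e 0xFFFF -m 0xF' (four reactor cores), which is the nvmfpid=885857 process the trace waits for on /var/tmp/spdk.sock.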
00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:24.443 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 [2024-05-15 02:51:27.461664] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:25:24.444 [2024-05-15 02:51:27.461736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.444 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.444 [2024-05-15 02:51:27.571643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.444 [2024-05-15 02:51:27.619756] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.444 [2024-05-15 02:51:27.619805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.444 [2024-05-15 02:51:27.619819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.444 [2024-05-15 02:51:27.619832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.444 [2024-05-15 02:51:27.619843] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.444 [2024-05-15 02:51:27.619908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.444 [2024-05-15 02:51:27.619968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.444 [2024-05-15 02:51:27.620074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.444 [2024-05-15 02:51:27.620075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.444 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:24.444 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@861 -- # return 0 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.703 Malloc0 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.703 Delay0 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.703 02:51:27 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.703 [2024-05-15 02:51:27.845784] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2107b60/0x225fa00) succeed. 00:25:24.703 [2024-05-15 02:51:27.861269] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21b7d40/0x211f800) succeed. 00:25:24.962 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.963 [2024-05-15 02:51:28.030840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:24.963 [2024-05-15 02:51:28.031247] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:24.963 02:51:28 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:25.905 02:51:29 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:25.905 02:51:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local i=0 00:25:25.905 02:51:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.905 02:51:29 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:25.905 02:51:29 
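Setup for the timeout test, condensed from the rpc_cmd calls traced above: a 64 MiB malloc bdev is wrapped in a delay bdev with low baseline latencies (the four 30s in the bdev_delay_create call), exported over RDMA on 192.168.100.8:4420, and connected from the initiator using the 'nvme connect -i 15' form the harness selects for these NICs. As a sketch, again assuming nvmf/common.sh is sourced so rpc_cmd, NVME_HOST and waitforserial resolve:

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB backing bdev, 512-byte blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # Delay0 layered on Malloc0
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Initiator side: connect, then poll lsblk until the SPDKISFASTANDAWESOME serial appears.
    nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME

The 'sleep 2' / 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME' loop that follows in the trace is that last waitforserial call resolving.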
nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # sleep 2 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # return 0 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=886374 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:27.805 02:51:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:27.805 [global] 00:25:27.805 thread=1 00:25:27.805 invalidate=1 00:25:27.805 rw=write 00:25:27.805 time_based=1 00:25:27.805 runtime=60 00:25:27.805 ioengine=libaio 00:25:27.805 direct=1 00:25:27.805 bs=4096 00:25:27.805 iodepth=1 00:25:27.805 norandommap=0 00:25:27.805 numjobs=1 00:25:27.805 00:25:27.805 verify_dump=1 00:25:27.805 verify_backlog=512 00:25:27.805 verify_state_save=0 00:25:27.805 do_verify=1 00:25:27.805 verify=crc32c-intel 00:25:27.805 [job0] 00:25:27.805 filename=/dev/nvme0n1 00:25:28.063 Could not set queue depth (nvme0n1) 00:25:28.063 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.063 fio-3.35 00:25:28.063 Starting 1 thread 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 true 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 true 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 true 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:31.351 02:51:34 
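This is the core of the test: with the 60-second fio verify job writing to /dev/nvme0n1 in the background, the script raises every latency class of Delay0 far above its baseline (the four bdev_delay_update_latency calls just above, presumably landing past the initiator's default I/O timeout, which is what the test exercises), sleeps, and then, as the trace continues below, drops them all back to 30 so the queued I/O can drain and fio can finish cleanly. In outline, with the fio-wrapper arguments and latency values copied from the trace and $rootdir standing in for the SPDK checkout path:

    # Launch the verify workload in the background and remember its pid.
    $rootdir/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
    fio_pid=$!
    sleep 3
    # Push the delay bdev latencies well past the initiator's timeout window.
    rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # Restore them so the outstanding writes complete within the fio runtime.
    for lat in avg_read avg_write p99_read p99_write; do
        rpc_cmd bdev_delay_update_latency Delay0 $lat 30
    done
    wait $fio_pid    # the test passes only if fio exits 0 ('fio_status=0' below)

The fio summary further down (READ 4279 KiB/s and WRITE 4301 KiB/s over 60 s, 251-252 MiB moved, nvme0n1 util 99.74%) is that job completing successfully, followed by 'nvmf hotplug test: fio successful as expected'.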
nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 true 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:31.351 02:51:34 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.888 true 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.888 true 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.888 true 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:33.888 true 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:33.888 02:51:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 886374 00:26:30.129 00:26:30.129 job0: (groupid=0, jobs=1): err= 0: pid=886473: Wed May 15 02:52:31 2024 00:26:30.129 read: IOPS=1069, BW=4279KiB/s (4381kB/s)(251MiB/60000msec) 00:26:30.129 slat (usec): min=5, max=15854, avg= 9.86, stdev=86.53 00:26:30.129 clat (usec): min=89, max=42382k, avg=786.14, stdev=167296.06 00:26:30.129 lat (usec): min=100, max=42382k, avg=795.99, stdev=167296.08 00:26:30.129 clat percentiles (usec): 00:26:30.129 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 110], 20.00th=[ 118], 00:26:30.129 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:26:30.129 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:26:30.129 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 253], 99.95th=[ 306], 00:26:30.129 | 99.99th=[ 416] 00:26:30.129 write: IOPS=1075, BW=4301KiB/s (4404kB/s)(252MiB/60000msec); 0 zone resets 00:26:30.129 slat (usec): min=6, max=388, avg=11.65, stdev= 2.92 00:26:30.129 clat (usec): min=86, max=481, avg=122.46, stdev=13.36 00:26:30.129 lat (usec): 
min=100, max=516, avg=134.11, stdev=13.75 00:26:30.129 clat percentiles (usec): 00:26:30.129 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 106], 20.00th=[ 115], 00:26:30.129 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:26:30.129 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 139], 00:26:30.129 | 99.00th=[ 149], 99.50th=[ 169], 99.90th=[ 239], 99.95th=[ 293], 00:26:30.129 | 99.99th=[ 408] 00:26:30.129 bw ( KiB/s): min= 4096, max=17328, per=100.00%, avg=14394.51, stdev=2244.71, samples=35 00:26:30.129 iops : min= 1024, max= 4332, avg=3598.63, stdev=561.18, samples=35 00:26:30.129 lat (usec) : 100=2.73%, 250=97.17%, 500=0.09%, 750=0.01% 00:26:30.129 lat (msec) : 2=0.01%, >=2000=0.01% 00:26:30.129 cpu : usr=1.22%, sys=2.38%, ctx=128699, majf=0, minf=104 00:26:30.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.129 issued rwts: total=64178,64512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:30.129 00:26:30.129 Run status group 0 (all jobs): 00:26:30.129 READ: bw=4279KiB/s (4381kB/s), 4279KiB/s-4279KiB/s (4381kB/s-4381kB/s), io=251MiB (263MB), run=60000-60000msec 00:26:30.129 WRITE: bw=4301KiB/s (4404kB/s), 4301KiB/s-4301KiB/s (4404kB/s-4404kB/s), io=252MiB (264MB), run=60000-60000msec 00:26:30.129 00:26:30.129 Disk stats (read/write): 00:26:30.129 nvme0n1: ios=64245/64000, merge=0/0, ticks=7757/7415, in_queue=15172, util=99.74% 00:26:30.129 02:52:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:30.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # local i=0 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1228 -- # return 0 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:30.129 nvmf hotplug test: fio successful as expected 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:30.129 rmmod nvme_rdma 00:26:30.129 rmmod nvme_fabrics 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 885857 ']' 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 885857 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@947 -- # '[' -z 885857 ']' 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # kill -0 885857 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # uname 00:26:30.129 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 885857 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 885857' 00:26:30.130 killing process with pid 885857 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # kill 885857 00:26:30.130 [2024-05-15 02:52:32.612794] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # wait 885857 00:26:30.130 [2024-05-15 02:52:32.728503] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:30.130 00:26:30.130 real 1m12.056s 00:26:30.130 user 4m24.939s 00:26:30.130 sys 0m7.526s 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:30.130 02:52:32 nvmf_rdma.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:30.130 ************************************ 00:26:30.130 END TEST nvmf_initiator_timeout 00:26:30.130 ************************************ 00:26:30.130 02:52:33 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:30.130 02:52:33 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:26:30.130 02:52:33 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:26:30.130 02:52:33 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:26:30.130 02:52:33 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:30.130 02:52:33 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:30.130 02:52:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.130 ************************************ 00:26:30.130 START TEST nvmf_device_removal 00:26:30.130 ************************************ 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1122 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:26:30.130 * Looking for test storage... 00:26:30.130 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # 
CONFIG_NVME_CUSE=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:26:30.130 
02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:26:30.130 02:52:33 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:26:30.130 02:52:33 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:30.131 #define SPDK_CONFIG_H 00:26:30.131 #define SPDK_CONFIG_APPS 1 00:26:30.131 #define SPDK_CONFIG_ARCH native 00:26:30.131 #undef SPDK_CONFIG_ASAN 00:26:30.131 #undef SPDK_CONFIG_AVAHI 00:26:30.131 #undef SPDK_CONFIG_CET 00:26:30.131 #define SPDK_CONFIG_COVERAGE 1 00:26:30.131 #define SPDK_CONFIG_CROSS_PREFIX 00:26:30.131 #undef SPDK_CONFIG_CRYPTO 00:26:30.131 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:30.131 #undef SPDK_CONFIG_CUSTOMOCF 00:26:30.131 #undef SPDK_CONFIG_DAOS 00:26:30.131 #define SPDK_CONFIG_DAOS_DIR 00:26:30.131 #define SPDK_CONFIG_DEBUG 1 00:26:30.131 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:30.131 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:26:30.131 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:26:30.131 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:26:30.131 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:30.131 #undef SPDK_CONFIG_DPDK_UADK 00:26:30.131 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:26:30.131 #define SPDK_CONFIG_EXAMPLES 1 00:26:30.131 #undef SPDK_CONFIG_FC 00:26:30.131 #define SPDK_CONFIG_FC_PATH 00:26:30.131 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:30.131 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:30.131 #undef SPDK_CONFIG_FUSE 00:26:30.131 #undef SPDK_CONFIG_FUZZER 00:26:30.131 #define SPDK_CONFIG_FUZZER_LIB 00:26:30.131 #undef SPDK_CONFIG_GOLANG 00:26:30.131 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:30.131 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:30.131 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:30.131 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:26:30.131 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:30.131 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:30.131 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:30.131 #define SPDK_CONFIG_IDXD 1 00:26:30.131 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:30.131 #undef SPDK_CONFIG_IPSEC_MB 00:26:30.131 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:30.131 #define SPDK_CONFIG_ISAL 1 00:26:30.131 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:30.131 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:30.131 #define SPDK_CONFIG_LIBDIR 00:26:30.131 #undef SPDK_CONFIG_LTO 00:26:30.131 #define SPDK_CONFIG_MAX_LCORES 00:26:30.131 #define SPDK_CONFIG_NVME_CUSE 1 00:26:30.131 #undef SPDK_CONFIG_OCF 00:26:30.131 #define SPDK_CONFIG_OCF_PATH 00:26:30.131 #define SPDK_CONFIG_OPENSSL_PATH 00:26:30.131 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:30.131 #define SPDK_CONFIG_PGO_DIR 00:26:30.131 #undef SPDK_CONFIG_PGO_USE 00:26:30.131 #define SPDK_CONFIG_PREFIX /usr/local 00:26:30.131 #undef SPDK_CONFIG_RAID5F 00:26:30.131 #undef SPDK_CONFIG_RBD 00:26:30.131 #define SPDK_CONFIG_RDMA 1 00:26:30.131 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:30.131 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:30.131 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:30.131 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:30.131 #define SPDK_CONFIG_SHARED 1 00:26:30.131 #undef SPDK_CONFIG_SMA 00:26:30.131 #define SPDK_CONFIG_TESTS 1 00:26:30.131 #undef SPDK_CONFIG_TSAN 00:26:30.131 #define SPDK_CONFIG_UBLK 1 00:26:30.131 #define SPDK_CONFIG_UBSAN 1 00:26:30.131 #undef SPDK_CONFIG_UNIT_TESTS 00:26:30.131 #undef SPDK_CONFIG_URING 00:26:30.131 #define SPDK_CONFIG_URING_PATH 00:26:30.131 #undef SPDK_CONFIG_URING_ZNS 00:26:30.131 #undef SPDK_CONFIG_USDT 00:26:30.131 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:30.131 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:30.131 #undef SPDK_CONFIG_VFIO_USER 00:26:30.131 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:30.131 #define SPDK_CONFIG_VHOST 1 00:26:30.131 #define SPDK_CONFIG_VIRTIO 1 00:26:30.131 #undef SPDK_CONFIG_VTUNE 00:26:30.131 #define SPDK_CONFIG_VTUNE_DIR 00:26:30.131 #define SPDK_CONFIG_WERROR 1 00:26:30.131 #define SPDK_CONFIG_WPDK_DIR 00:26:30.131 #undef SPDK_CONFIG_XNVME 00:26:30.131 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 1 00:26:30.131 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 
00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : v23.11 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : mlx5 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 
00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:30.132 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@200 -- # cat 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 
-- # export CLEAR_HUGE=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 894405 ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 894405 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.uFWEhR 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uFWEhR/tests/target /tmp/spdk.uFWEhR 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:26:30.133 02:52:33 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:26:30.133 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=972910592 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4311519232 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=54765600768 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742718976 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=6977118208 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=30858063872 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871359488 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=13295616 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=12325736448 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348547072 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=22810624 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=30870925312 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871359488 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=434176 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=6174265344 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174269440 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:26:30.134 * Looking for test storage... 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=54765600768 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=9191710720 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.134 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set -o errtrace 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1684 -- # true 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # xtrace_fd 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:26:30.134 02:52:33 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.135 02:52:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:26:36.745 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:26:36.745 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- 
# (( 1 == 0 )) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:26:36.745 Found net devices under 0000:18:00.0: mlx_0_0 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:26:36.745 Found net devices under 0000:18:00.1: mlx_0_1 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:36.745 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:36.746 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:36.746 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:26:36.746 altname enp24s0f0np0 00:26:36.746 altname ens785f0np0 00:26:36.746 inet 192.168.100.8/24 scope global mlx_0_0 00:26:36.746 valid_lft forever preferred_lft forever 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr 
show mlx_0_1 00:26:36.746 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:36.746 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:26:36.746 altname enp24s0f1np1 00:26:36.746 altname ens785f1np1 00:26:36.746 inet 192.168.100.9/24 scope global mlx_0_1 00:26:36.746 valid_lft forever preferred_lft forever 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:36.746 02:52:39 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:36.746 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 
-- # get_ip_address mlx_0_1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:37.006 192.168.100.9' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:37.006 192.168.100.9' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:37.006 192.168.100.9' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:26:37.006 ************************************ 00:26:37.006 START TEST nvmf_device_removal_pci_remove_no_srq 00:26:37.006 ************************************ 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1122 -- # test_remove_and_rescan --no-srq 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.006 02:52:40 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=897287 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 897287 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@828 -- # '[' -z 897287 ']' 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:37.006 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.006 [2024-05-15 02:52:40.213234] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:26:37.006 [2024-05-15 02:52:40.213301] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.006 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.273 [2024-05-15 02:52:40.315707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.273 [2024-05-15 02:52:40.366561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.273 [2024-05-15 02:52:40.366614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.273 [2024-05-15 02:52:40.366629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.273 [2024-05-15 02:52:40.366647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.273 [2024-05-15 02:52:40.366658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
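[Editor's note] The trace above is the nvmfappstart step: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0x3 and the harness blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming the job's checkout path and a simple polling loop rather than the repo's actual waitforlisten helper:

    # Launch the NVMe-oF target and wait for its RPC socket to come up.
    # SPDK_DIR and the poll interval are illustrative assumptions.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll until the target answers an RPC; give up after ~100 tries.
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done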
00:26:37.273 [2024-05-15 02:52:40.366723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.273 [2024-05-15 02:52:40.366729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@861 -- # return 0 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.273 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.273 [2024-05-15 02:52:40.551775] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11ac7a0/0x11b0c90) succeed. 00:26:37.535 [2024-05-15 02:52:40.565278] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11adca0/0x11f2320) succeed. 
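[Editor's note] The nvmf_create_transport call just above brings up the RDMA transport (the two create_ib_device notices), and the trace that follows creates, per RDMA netdev, a 128 MiB malloc bdev, an NVMe-oF subsystem, a namespace, and an RDMA listener on the interface's 192.168.100.x address. A condensed sketch of those rpc_cmd calls; the loop form and the $rpc/$SPDK_DIR variables are assumptions carried over from the earlier sketch:

    rpc="$SPDK_DIR/scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default

    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq

    for dev in mlx_0_0 mlx_0_1; do
        ip=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
        nqn=nqn.2016-06.io.spdk:system_$dev

        "$rpc" bdev_malloc_create 128 512 -b "$dev"               # 128 MiB bdev, 512 B blocks
        "$rpc" nvmf_create_subsystem "$nqn" -a -s "SPDK000$dev"   # allow any host, fixed serial
        "$rpc" nvmf_subsystem_add_ns "$nqn" "$dev"
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420
    done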
00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:26:37.535 
02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.535 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.536 [2024-05-15 02:52:40.707263] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:37.536 [2024-05-15 02:52:40.707620] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:37.536 [2024-05-15 02:52:40.798251] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # 
generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=897395 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 897395 /var/tmp/bdevperf.sock 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@828 -- # '[' -z 897395 ']' 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:37.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
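[Editor's note] Next the trace starts bdevperf in RPC-server mode: -z keeps it idle on /var/tmp/bdevperf.sock (core mask 0x4, queue depth 128, 4096-byte verify workload, 90 s run time) until bdevs are attached and a perform_tests RPC arrives. A sketch of that launch, reusing the path assumptions above:

    BDEVPERF_SOCK=/var/tmp/bdevperf.sock
    "$SPDK_DIR/build/examples/bdevperf" \
        -m 0x4 -z -r "$BDEVPERF_SOCK" \
        -q 128 -o 4096 -w verify -t 90 &
    bdevperf_pid=$!
    # With -z bdevperf issues no I/O on its own; it waits for controllers to be
    # attached over the RPC socket and for a perform_tests request.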
00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:37.536 02:52:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@861 -- # return 0 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.474 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:38.734 Nvme_mlx_0_0n1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
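[Editor's note] The bdev_nvme_attach_controller calls traced here give bdevperf one RDMA controller per subsystem (Nvme_mlx_0_0n1 above, Nvme_mlx_0_1n1 below), and bdevperf.py then issues perform_tests with a 120 s timeout. A sketch of that sequence; the comment on -l/-o reflects the usual meaning of those rpc.py options (controller-loss timeout and reconnect delay), which the log itself does not spell out:

    for dev in mlx_0_0 mlx_0_1; do
        ip=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
        "$rpc" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller \
            -b "Nvme_$dev" -t rdma -a "$ip" -s 4420 -f ipv4 \
            -n "nqn.2016-06.io.spdk:system_$dev" \
            -l -1 -o 1    # -1 = never drop the controller, retry reconnect every 1 s
    done
    # Run I/O for up to 120 s while the NICs are removed and rescanned underneath.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 120 -s "$BDEVPERF_SOCK" perform_tests &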
00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:38.734 Nvme_mlx_0_1n1 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=897527 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:26:38.734 02:52:41 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:26:44.006 02:52:46 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:26:44.006 02:52:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:26:44.006 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.007 mlx5_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:26:44.007 02:52:47 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:26:44.007 02:52:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:26:44.007 [2024-05-15 02:52:47.132583] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:26:44.007 [2024-05-15 02:52:47.132782] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:44.007 [2024-05-15 02:52:47.135751] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:44.007 [2024-05-15 02:52:47.135780] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:26:44.007 [2024-05-15 02:52:47.135795] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:26:44.007 [2024-05-15 02:52:47.135808] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135825] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135836] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135847] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135858] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135869] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135880] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135891] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135910] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135922] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135932] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135944] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135955] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135966] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135976] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.135987] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.135998] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 02:52:47.136009] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.007 [2024-05-15 02:52:47.136021] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.007 [2024-05-15 
02:52:47.136031] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 [... identical qpair-dump entries elided: the pair "rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1" / "rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2" repeats once per remaining outstanding request, timestamps 02:52:47.136042 through 02:52:47.137828 ...]
00:26:44.008 [2024-05-15 02:52:47.137839] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.008 [2024-05-15 02:52:47.137850] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.008 [2024-05-15 02:52:47.137861] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.008 [2024-05-15 02:52:47.137871] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.008 [2024-05-15 02:52:47.137882] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.008 [2024-05-15 02:52:47.137893] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:44.008 [2024-05-15 02:52:47.137909] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:44.008 [2024-05-15 02:52:47.137920] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.575 02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:50.575 
02:52:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.575 02:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:26:50.575 02:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:26:50.575 02:52:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:26:50.834 [2024-05-15 02:52:53.931929] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x13e9df0, err 11. Skip rescan. 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:26:50.834 02:52:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:26:51.093 [2024-05-15 02:52:54.320934] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11adaf0/0x11b0c90) succeed. 00:26:51.093 [2024-05-15 02:52:54.321018] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
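For readability, here is a minimal bash sketch of the remove-and-rescan sequence the device_removal.sh trace above is exercising for mlx_0_0. The helper names (get_pci_dir, remove_one_nic, rescan_pci) and the ls/ip commands are taken from the trace; the sysfs files written to by the two traced "echo 1" lines are not visible in xtrace output (redirections are not traced), so the .../remove and /sys/bus/pci/rescan targets below are assumptions based on the standard Linux PCI remove/rescan interface, and the hard-coded 0000:18:00.0 address simply mirrors the readlink shown above.

  # Resolve the netdev's PCI directory, as get_pci_dir does in the trace.
  get_pci_dir() {
      readlink -f "/sys/bus/pci/devices/0000:18:00.0/net/$1/device"
  }

  # Hot-remove the NIC, then ask the PCI bus to rescan so it comes back.
  # Assumption: these are the sysfs files behind the traced "echo 1" lines.
  remove_one_nic() {
      echo 1 > "$(get_pci_dir "$1")/remove"
  }
  rescan_pci() {
      echo 1 > /sys/bus/pci/rescan
  }

  remove_one_nic mlx_0_0
  rescan_pci
  # After the rescan the trace re-discovers the netdev under the PCI node and
  # brings the link back up before re-adding its IP address.
  new_net_dev=$(ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net)
  ip link set "$new_net_dev" up
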
00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:54.382 [2024-05-15 02:52:57.324947] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:54.382 [2024-05-15 02:52:57.324995] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:26:54.382 [2024-05-15 02:52:57.325022] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:26:54.382 [2024-05-15 02:52:57.325045] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq 
-r '.poll_groups[0].transports[].devices[].name' 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.382 mlx5_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:26:54.382 02:52:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:26:54.382 [2024-05-15 02:52:57.505778] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:26:54.382 [2024-05-15 02:52:57.505870] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:54.382 [2024-05-15 02:52:57.514159] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:54.382 [2024-05-15 02:52:57.514183] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:26:54.382 [2024-05-15 02:52:57.514195] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:26:54.382 [2024-05-15 02:52:57.514207] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514219] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.382 [2024-05-15 02:52:57.514230] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514240] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.382 [2024-05-15 02:52:57.514252] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514262] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.382 [2024-05-15 02:52:57.514274] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514285] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.382 [2024-05-15 02:52:57.514296] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514307] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:54.382 [2024-05-15 02:52:57.514318] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 02:52:57.514334] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.382 [2024-05-15 02:52:57.514345] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.382 [2024-05-15 
02:52:57.514356] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 [... repetitive qpair-dump entries elided: alternating "rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0|1" and "rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1|2" lines for the remaining outstanding requests, timestamps 02:52:57.514367 through 02:52:57.516194 ...]
00:26:54.384 [2024-05-15 02:52:57.516205] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:54.384 [2024-05-15 02:52:57.516215] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:54.384 [2024-05-15 02:52:57.516226] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.384 [2024-05-15 02:52:57.516239] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.384 [2024-05-15 02:52:57.516250] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.384 [2024-05-15 02:52:57.516260] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.384 [2024-05-15 02:52:57.516271] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:54.384 [2024-05-15 02:52:57.516282] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:54.384 [2024-05-15 02:52:57.516293] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:26:54.384 [2024-05-15 02:52:57.516305] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:54.384 [2024-05-15 02:52:57.516316] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.384 [2024-05-15 02:52:57.516326] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:26:54.384 [2024-05-15 02:52:57.516337] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.384 [2024-05-15 02:52:57.516348] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:26:54.384 [2024-05-15 02:52:57.516359] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:26:54.384 [2024-05-15 02:52:57.516369] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:27:00.953 02:53:03 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:27:00.953 02:53:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:27:00.953 [2024-05-15 02:53:04.222756] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x14e3c80, err 11. Skip rescan. 00:27:00.953 [2024-05-15 02:53:04.228037] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x14e3c80, err 11. Skip rescan. 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:27:01.212 02:53:04 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:27:01.472 [2024-05-15 02:53:04.610060] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13eb070/0x11f2320) succeed. 00:27:01.472 [2024-05-15 02:53:04.610158] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
00:27:04.759 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:27:04.759 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:27:04.760 [2024-05-15 02:53:07.660944] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:04.760 [2024-05-15 02:53:07.661019] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:27:04.760 [2024-05-15 02:53:07.661057] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:27:04.760 [2024-05-15 02:53:07.661092] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:27:04.760 02:53:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 
897527 00:28:12.508 0 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@947 -- # '[' -z 897395 ']' 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # kill -0 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # uname 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 897395' 00:28:12.508 killing process with pid 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@966 -- # kill 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@971 -- # wait 897395 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:28:12.508 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:28:12.508 [2024-05-15 02:52:40.858176] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:12.508 [2024-05-15 02:52:40.858252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897395 ] 00:28:12.508 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.508 [2024-05-15 02:52:40.942391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.508 [2024-05-15 02:52:40.983342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.508 Running I/O for 90 seconds... 
00:28:12.508 [2024-05-15 02:52:47.131133] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:12.509 [2024-05-15 02:52:47.131174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.509 [2024-05-15 02:52:47.131188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.509 [2024-05-15 02:52:47.131202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.509 [2024-05-15 02:52:47.131213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.509 [2024-05-15 02:52:47.131224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.509 [2024-05-15 02:52:47.131238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.509 [2024-05-15 02:52:47.131248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.509 [2024-05-15 02:52:47.131258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.509 [2024-05-15 02:52:47.132810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.509 [2024-05-15 02:52:47.132827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.509 [2024-05-15 02:52:47.132864] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:12.509 [2024-05-15 02:52:47.141123] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.151146] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.161173] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.171201] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.181225] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.191253] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.201278] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.211530] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.221549] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.231953] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.509 [2024-05-15 02:52:47.241978] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.252004] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.262637] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.272663] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.282826] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.293189] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.303216] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.313675] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.323701] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.333727] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.344264] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.354290] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.364572] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.375024] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.385049] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.395541] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.405565] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.415592] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.426076] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.436101] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.446240] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.456722] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.466741] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.509 [2024-05-15 02:52:47.477252] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.487276] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.497302] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.507959] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.517987] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.528135] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.538535] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.548560] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.559049] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.569074] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.579100] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.589606] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.599632] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.609789] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.620193] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.630219] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.640616] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.650641] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.660666] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.671250] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.681276] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.691512] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.702016] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.509 [2024-05-15 02:52:47.712033] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.722609] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.732633] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.742661] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.753134] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.763161] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.773463] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.783918] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.793943] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.804761] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.814788] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.824813] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.835272] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.845295] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.855373] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.865711] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.875737] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.886011] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.896036] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.906061] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.916318] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.926352] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.509 [2024-05-15 02:52:47.936419] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.509 [2024-05-15 02:52:47.946865] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:47.956890] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:47.967343] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:47.977368] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:47.987395] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:47.997751] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.007776] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.017802] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.028062] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.038087] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.048437] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.058513] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.068538] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.078997] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.089024] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.099050] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.109299] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.119326] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.510 [2024-05-15 02:52:48.129524] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.510 [2024-05-15 02:52:48.135394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 
02:52:48.135611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135808] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.135988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.135998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.136009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.136018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.136030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.136039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.510 [2024-05-15 02:52:48.136050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x180200 00:28:12.510 [2024-05-15 02:52:48.136060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x180200 00:28:12.511 [2024-05-15 02:52:48.136828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.511 [2024-05-15 02:52:48.136840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.136980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.136991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.137000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.137020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x180200 00:28:12.512 [2024-05-15 02:52:48.137041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 
[2024-05-15 02:52:48.137144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.512 [2024-05-15 02:52:48.137638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.512 [2024-05-15 02:52:48.137649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 
p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.137987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.137996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.138007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.513 [2024-05-15 02:52:48.138016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.151070] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:28:12.513 [2024-05-15 02:52:48.151138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.513 [2024-05-15 02:52:48.151149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.513 [2024-05-15 02:52:48.151158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104824 len:8 PRP1 0x0 PRP2 0x0 00:28:12.513 [2024-05-15 02:52:48.151168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.151182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:12.513 [2024-05-15 02:52:48.151190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:12.513 [2024-05-15 02:52:48.151198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104832 len:8 PRP1 0x0 PRP2 0x0 00:28:12.513 [2024-05-15 02:52:48.151208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.513 [2024-05-15 02:52:48.152609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.513 [2024-05-15 02:52:48.152891] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:28:12.513 [2024-05-15 02:52:48.152918] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.513 [2024-05-15 02:52:48.152928] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.513 [2024-05-15 02:52:48.152948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.513 [2024-05-15 02:52:48.152960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:28:12.513 [2024-05-15 02:52:48.152974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.513 [2024-05-15 02:52:48.152983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.513 [2024-05-15 02:52:48.152994] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.513 [2024-05-15 02:52:48.153016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.513 [2024-05-15 02:52:48.153025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.513 [2024-05-15 02:52:49.156012] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:28:12.513 [2024-05-15 02:52:49.156045] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.513 [2024-05-15 02:52:49.156054] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.513 [2024-05-15 02:52:49.156075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.513 [2024-05-15 02:52:49.156086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.513 [2024-05-15 02:52:49.156125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.513 [2024-05-15 02:52:49.156136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.513 [2024-05-15 02:52:49.156147] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.513 [2024-05-15 02:52:49.156171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.513 [2024-05-15 02:52:49.156181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.513 [2024-05-15 02:52:50.158658] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:28:12.513 [2024-05-15 02:52:50.158700] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.513 [2024-05-15 02:52:50.158709] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.513 [2024-05-15 02:52:50.158736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:50.158748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:28:12.514 [2024-05-15 02:52:50.158762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.514 [2024-05-15 02:52:50.158773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.514 [2024-05-15 02:52:50.158784] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.514 [2024-05-15 02:52:50.158808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.514 [2024-05-15 02:52:50.158817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.514 [2024-05-15 02:52:52.163728] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.514 [2024-05-15 02:52:52.163766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.514 [2024-05-15 02:52:52.163795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:52.163806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.514 [2024-05-15 02:52:52.163820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.514 [2024-05-15 02:52:52.163829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.514 [2024-05-15 02:52:52.163841] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.514 [2024-05-15 02:52:52.163866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.514 [2024-05-15 02:52:52.163876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.514 [2024-05-15 02:52:54.168850] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.514 [2024-05-15 02:52:54.168880] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.514 [2024-05-15 02:52:54.168913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:54.168925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.514 [2024-05-15 02:52:54.168939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.514 [2024-05-15 02:52:54.168948] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.514 [2024-05-15 02:52:54.168960] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.514 [2024-05-15 02:52:54.168986] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:12.514 [2024-05-15 02:52:54.168997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.514 [2024-05-15 02:52:56.173966] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.514 [2024-05-15 02:52:56.173994] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.514 [2024-05-15 02:52:56.174020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:56.174032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.514 [2024-05-15 02:52:56.174045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.514 [2024-05-15 02:52:56.174055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.514 [2024-05-15 02:52:56.174066] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.514 [2024-05-15 02:52:56.174085] bdev_nvme.c:2873:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:28:12.514 [2024-05-15 02:52:56.174104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:12.514 [2024-05-15 02:52:56.174126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.514 [2024-05-15 02:52:57.176554] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:28:12.514 [2024-05-15 02:52:57.176583] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:12.514 [2024-05-15 02:52:57.176614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:57.176625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:28:12.514 [2024-05-15 02:52:57.176639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:28:12.514 [2024-05-15 02:52:57.176649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:28:12.514 [2024-05-15 02:52:57.176659] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:28:12.514 [2024-05-15 02:52:57.176683] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:12.514 [2024-05-15 02:52:57.176693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:28:12.514 [2024-05-15 02:52:57.508059] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:12.514 [2024-05-15 02:52:57.508092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.514 [2024-05-15 02:52:57.508104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.514 [2024-05-15 02:52:57.508115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.514 [2024-05-15 02:52:57.508125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.514 [2024-05-15 02:52:57.508135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.514 [2024-05-15 02:52:57.508145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.514 [2024-05-15 02:52:57.508155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.514 [2024-05-15 02:52:57.508164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32712 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:28:12.514 [2024-05-15 02:52:57.518666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:12.514 [2024-05-15 02:52:57.518687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:28:12.514 [2024-05-15 02:52:57.518716] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:12.514 [2024-05-15 02:52:57.518750] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.528757] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.538784] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.548809] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.558834] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.568860] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.578885] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.588910] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.598936] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.514 [2024-05-15 02:52:57.608960] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.618987] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.629012] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.639037] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.649063] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.659090] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.669114] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.679140] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.689166] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.699189] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.709214] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.719239] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.729264] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.739291] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.749315] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.759339] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.769365] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.779390] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.789416] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.799440] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.809467] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.819494] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.829521] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.514 [2024-05-15 02:52:57.839546] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.849571] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.859596] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.869623] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.514 [2024-05-15 02:52:57.879648] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.889672] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.899697] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.909723] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.919748] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.929773] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.939799] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.949824] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.959848] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.969872] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.979902] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.989927] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:57.999953] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.009977] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.020002] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.030026] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.040053] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.050078] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.060104] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.515 [2024-05-15 02:52:58.070129] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.080154] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.090180] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.100206] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.110233] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.120260] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.130285] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.140311] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.150336] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.160361] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.170388] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.183311] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.206315] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.216322] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.221051] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:12.515 [2024-05-15 02:52:58.226347] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.236371] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.246398] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.256425] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.266450] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.276476] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.286502] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.296526] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.515 [2024-05-15 02:52:58.306553] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.316578] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.326603] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.336630] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.346657] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.356683] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.366708] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.376736] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.386762] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.396787] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.406812] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.416838] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.426862] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.436888] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.446912] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.456936] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.466963] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.476990] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.487017] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.497042] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.507068] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:12.515 [2024-05-15 02:52:58.517095] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:12.515 [2024-05-15 02:52:58.521123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:235880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:235888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:235896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:235904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:235912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:235920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:235928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:235936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:235944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 
02:52:58.521321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:235952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:235960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1be200 00:28:12.515 [2024-05-15 02:52:58.521354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.515 [2024-05-15 02:52:58.521365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:235968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:235976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:235984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:235992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:236000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:236008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:236016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1be200 00:28:12.516 [2024-05-15 02:52:58.521499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0 00:28:12.516 [2024-05-15 02:52:58.521512] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:236024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1be200
00:28:12.516 [2024-05-15 02:52:58.521522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0
00:28:12.516 [2024-05-15 02:52:58.521533 - 02:52:58.523782] nvme_qpair.c: 243/474: (repeated per-command *NOTICE* pairs condensed) READ sqid:1 nsid:1 lba:236032-236536 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x1be200, and WRITE sqid:1 nsid:1 lba:236544-236888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:32712 cdw0:a6873200 sqhd:2530 p:0 m:0 dnr:0
00:28:12.519 [2024-05-15 02:52:58.536856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:12.519 [2024-05-15 02:52:58.536875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:12.519 [2024-05-15 02:52:58.536885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:236896 len:8 PRP1 0x0 PRP2 0x0
00:28:12.519 [2024-05-15 02:52:58.536901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:12.519 [2024-05-15 02:52:58.536953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:28:12.519 [2024-05-15 02:52:58.537211] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:28:12.519 [2024-05-15 02:52:58.537226] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:28:12.519 [2024-05-15 02:52:58.537236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:28:12.519 [2024-05-15 02:52:58.537254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:12.519 [2024-05-15 02:52:58.537266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:28:12.519 [2024-05-15 02:52:58.537280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:28:12.519 [2024-05-15 02:52:58.537290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:28:12.519 [2024-05-15 02:52:58.537300] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:28:12.519 [2024-05-15 02:52:58.537322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:12.519 [2024-05-15 02:52:58.537331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:28:12.519 (repeated retry attempts condensed) The same failure sequence - RDMA address resolution error, Failed to connect rqpair=0x2000192e4280, CQ transport error -6 on qpair id 0, controller reinitialization failed, Resetting controller failed, resetting controller - repeats for the reconnect attempts at 02:52:59.540291, 02:53:00.542905, 02:53:02.548682, 02:53:04.554660 and 02:53:06.559813; the 02:52:59 and 02:53:00 attempts also log the RDMA_CM_EVENT_ADDR_ERROR (1) CM event error.
00:28:12.519 [2024-05-15 02:53:08.565227] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:28:12.519 [2024-05-15 02:53:08.565267] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:28:12.519 [2024-05-15 02:53:08.565298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:12.519 [2024-05-15 02:53:08.565310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:28:12.519 [2024-05-15 02:53:08.565337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:28:12.519 [2024-05-15 02:53:08.565347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:28:12.519 [2024-05-15 02:53:08.565360] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:28:12.519 [2024-05-15 02:53:08.565392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:12.519 [2024-05-15 02:53:08.565403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:28:12.519 [2024-05-15 02:53:09.619900] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:12.519
00:28:12.519                                                                                       Latency(us)
00:28:12.519 Device Information                                                               : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:28:12.519 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:12.519 	 Verification LBA range: start 0x0 length 0x8000
00:28:12.519 	 Nvme_mlx_0_0n1 :      90.01    8385.00      32.75       0.00     0.00   15247.68    1410.45  12079595.52
00:28:12.519 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:12.519 	 Verification LBA range: start 0x0 length 0x8000
00:28:12.519 	 Nvme_mlx_0_1n1 :      90.01    7174.23      28.02       0.00     0.00   17819.95    3504.75  13071639.60
00:28:12.519 ===================================================================================================================
00:28:12.520 Total          :             15559.23      60.78       0.00     0.00   16433.76    1410.45  13071639.60
00:28:12.520 Received shutdown signal, test time was about 90.000000 seconds
00:28:12.520
00:28:12.520                                                                                       Latency(us)
00:28:12.520 Device Information                                                               : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:28:12.520 ===================================================================================================================
00:28:12.520 Total          :                  0.00       0.00       0.00       0.00       0.00       0.00         0.00
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 897287
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@947 -- # '[' -z 897287 ']'
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@951 -- # kill -0 897287
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # uname
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 897287
00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:28:12.520 02:54:12
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 897287' 00:28:12.520 killing process with pid 897287 00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@966 -- # kill 897287 00:28:12.520 [2024-05-15 02:54:12.654409] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:12.520 02:54:12 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@971 -- # wait 897287 00:28:12.520 [2024-05-15 02:54:12.696444] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:28:12.520 00:28:12.520 real 1m32.854s 00:28:12.520 user 4m23.615s 00:28:12.520 sys 0m6.009s 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 ************************************ 00:28:12.520 END TEST nvmf_device_removal_pci_remove_no_srq 00:28:12.520 ************************************ 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 ************************************ 00:28:12.520 START TEST nvmf_device_removal_pci_remove 00:28:12.520 ************************************ 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1122 -- # test_remove_and_rescan 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=909819 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 909819 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 
00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@828 -- # '[' -z 909819 ']' 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 [2024-05-15 02:54:13.147922] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:12.520 [2024-05-15 02:54:13.147985] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.520 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.520 [2024-05-15 02:54:13.257871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:12.520 [2024-05-15 02:54:13.308223] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.520 [2024-05-15 02:54:13.308272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.520 [2024-05-15 02:54:13.308286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.520 [2024-05-15 02:54:13.308299] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.520 [2024-05-15 02:54:13.308310] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.520 [2024-05-15 02:54:13.308375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.520 [2024-05-15 02:54:13.308380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@861 -- # return 0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.520 [2024-05-15 02:54:13.491305] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbfd7a0/0xc01c90) succeed. 00:28:12.520 [2024-05-15 02:54:13.504803] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbfeca0/0xc43320) succeed. 
00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:28:12.520 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:28:12.521 02:54:13 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:28:12.521 02:54:13 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 [2024-05-15 02:54:13.720187] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:12.521 [2024-05-15 02:54:13.720585] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # 
rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.521 [2024-05-15 02:54:13.806473] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@91 -- # bdevperf_pid=910034 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 910034 /var/tmp/bdevperf.sock 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@828 -- # '[' -z 910034 ']' 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:12.521 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:12.522 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:12.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:12.522 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:12.522 02:54:13 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@861 -- # return 0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:12.522 02:54:14 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.522 Nvme_mlx_0_0n1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:12.522 Nvme_mlx_0_1n1 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=910047 00:28:12.522 02:54:14 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:28:12.522 02:54:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:16.720 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:28:16.720 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:28:16.720 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.721 mlx5_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:28:16.721 02:54:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:28:16.721 [2024-05-15 02:54:19.458875] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
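The removal reported by the 02:54:19 records above is triggered purely from user space: bdevperf has already been attached to both subsystems over /var/tmp/bdevperf.sock, and remove_one_nic then hot-unplugs the PCI function behind mlx_0_0 through sysfs. A condensed sketch, with the one assumption that the bare "echo 1" in the trace goes to the standard PCI "remove" attribute (the exact node is not echoed):

  # attach bdevperf to the first subsystem (flags verbatim from the trace)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1

  # hot-remove the PCI function backing mlx_0_0 (sysfs 'remove' attribute assumed)
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device)
  echo 1 > "$pci_dir/remove"

  # the device should then drop out of the target's poll-group stats
  scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_0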
00:28:16.721 [2024-05-15 02:54:19.459350] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:16.721 [2024-05-15 02:54:19.462922] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:16.721 [2024-05-15 02:54:19.462953] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 126 00:28:16.721 [2024-05-15 02:54:19.463589] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:28:21.999 02:54:25 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:28:21.999 02:54:25 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:28:22.937 [2024-05-15 02:54:26.120100] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xcdb110, err 11. Skip rescan. 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:28:23.196 02:54:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:28:23.455 [2024-05-15 02:54:26.540181] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcc9a80/0xc01c90) succeed. 00:28:23.455 [2024-05-15 02:54:26.540265] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
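Recovery in the records that follow is symmetric: after the PCI rescan (the "echo 1" at device_removal.sh@57; writing to /sys/bus/pci/rescan is assumed here, since the target node is not echoed), the netdev reappears without its address, so the test reconfigures it and waits for the listener to come back. A rough sketch:

  echo 1 > /sys/bus/pci/rescan                       # assumed target of the rescan_pci helper
  new_net_dev=$(ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net)   # -> mlx_0_0
  ip link set "$new_net_dev" up
  ip addr add 192.168.100.8/24 dev "$new_net_dev"    # the address is lost across the re-probe

  # the port is considered back once the target reports more IB devices than
  # right after the removal (ib_count_after_remove=1 above)
  scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'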
00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:28:26.746 [2024-05-15 02:54:29.487490] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:26.746 [2024-05-15 02:54:29.487560] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:28:26.746 [2024-05-15 02:54:29.487601] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:28:26.746 [2024-05-15 02:54:29.487633] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:28:26.746 mlx5_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:28:26.746 02:54:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:28:26.746 [2024-05-15 02:54:29.685080] rdma.c:3577:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:28:26.746 [2024-05-15 02:54:29.685174] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:26.746 [2024-05-15 02:54:29.691265] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:28:26.746 [2024-05-15 02:54:29.691290] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 66 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.017 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local 
rdma_dev_name= 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.274 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:28:32.275 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:28:32.275 02:54:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:28:33.209 [2024-05-15 02:54:36.344517] rdma.c:3266:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xbeb540, err 11. Skip rescan. 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:28:33.209 02:54:36 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:28:33.467 [2024-05-15 02:54:36.754650] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc00b00/0xc43320) succeed. 00:28:33.467 [2024-05-15 02:54:36.754749] rdma.c:3319:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
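The same gate is applied below for mlx_0_1: the test polls the target up to ten times until the reported IB device count exceeds the post-removal count, then breaks out of the wait loop. Roughly (the pause between retries is an assumption; in this run the check passes on the first pass):

  ib_count_after_remove=1
  for i in $(seq 1 10); do
    ib_count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices | length')
    (( ib_count > ib_count_after_remove )) && break
    sleep 5    # retry interval assumed; not visible in the trace
  done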
00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:28:36.755 [2024-05-15 02:54:39.797628] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:28:36.755 [2024-05-15 02:54:39.797696] rdma.c:3325:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:28:36.755 [2024-05-15 02:54:39.797737] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:28:36.755 [2024-05-15 02:54:39.797772] rdma.c:3855:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:28:36.755 02:54:39 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 910047 00:29:44.468 0 00:29:44.468 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # 
killprocess 910034 00:29:44.468 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@947 -- # '[' -z 910034 ']' 00:29:44.468 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # kill -0 910034 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # uname 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 910034 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # echo 'killing process with pid 910034' 00:29:44.469 killing process with pid 910034 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@966 -- # kill 910034 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@971 -- # wait 910034 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:29:44.469 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:29:44.469 [2024-05-15 02:54:13.866649] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:44.469 [2024-05-15 02:54:13.866725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910034 ] 00:29:44.469 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.469 [2024-05-15 02:54:13.949714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.469 [2024-05-15 02:54:13.990500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.469 Running I/O for 90 seconds... 
00:29:44.469 [2024-05-15 02:54:19.461932] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:29:44.469 [2024-05-15 02:54:19.461976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.469 [2024-05-15 02:54:19.461989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.469 [2024-05-15 02:54:19.462002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.469 [2024-05-15 02:54:19.462012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.469 [2024-05-15 02:54:19.462023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.469 [2024-05-15 02:54:19.462033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.469 [2024-05-15 02:54:19.462043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.469 [2024-05-15 02:54:19.462052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.469 [2024-05-15 02:54:19.467720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.469 [2024-05-15 02:54:19.467752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.469 [2024-05-15 02:54:19.467808] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:29:44.469 [2024-05-15 02:54:19.471922] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.481948] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.491970] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.502239] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.512264] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.522291] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.532316] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.542380] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.552436] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.563244] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.469 [2024-05-15 02:54:19.573268] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.583406] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.593905] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.603929] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.614460] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.624561] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.634673] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.645391] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.655418] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.665729] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.676120] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.686146] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.696681] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.706704] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.716731] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.727223] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.737247] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.747566] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.758040] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.768066] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.778593] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.788620] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.798645] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.469 [2024-05-15 02:54:19.809293] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.819318] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.829637] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.840056] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.850080] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.860609] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.870636] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.880662] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.891244] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.901269] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.911510] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.469 [2024-05-15 02:54:19.921938] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.931955] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.942570] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.952608] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.962786] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.973548] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.983575] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:19.994040] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.004532] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.014557] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.025086] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.035112] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.470 [2024-05-15 02:54:20.045280] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.055307] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.065333] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.075642] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.085698] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.095724] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.106263] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.116281] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.126481] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.136867] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.146891] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.157451] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.167476] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.177502] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.188042] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.198070] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.208241] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.218696] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.228717] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.239251] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.249276] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.259302] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.269922] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.470 [2024-05-15 02:54:20.279948] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.290146] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.300546] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.310573] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.321155] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.331205] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.341231] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.351740] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.361766] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.371974] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.382476] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.392502] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.403022] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.413047] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.423075] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.433613] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.443638] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.453833] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.470 [2024-05-15 02:54:20.464266] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
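The long run of "Unable to perform failover, already in progress" notices above is the bdev_nvme layer turning away new failover requests while an earlier one is still being serviced. A minimal illustrative sketch of that guard pattern is below; the struct and function names are made up for the example and this is not the actual SPDK bdev_nvme code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical controller state; the real SPDK controller object is far richer. */
struct ctrlr {
    bool failover_in_progress;   /* assumed to be touched only on the owning thread */
};

/* Reject a new failover request while a previous one is still running,
 * mirroring the repeated "already in progress" notices in the log above. */
static int request_failover(struct ctrlr *c)
{
    if (c->failover_in_progress) {
        fprintf(stderr, "NOTICE: Unable to perform failover, already in progress.\n");
        return -1;               /* caller simply retries later */
    }
    c->failover_in_progress = true;
    /* ... switch I/O to the alternate path here ... */
    return 0;
}

static void failover_done(struct ctrlr *c)
{
    c->failover_in_progress = false;
}

int main(void)
{
    struct ctrlr c = { .failover_in_progress = false };
    request_failover(&c);   /* first request proceeds */
    request_failover(&c);   /* second request is rejected, as in the log */
    failover_done(&c);
    return 0;
}

Each notice in the log presumably corresponds to one such rejected attempt while the in-flight failover had not yet completed.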
00:29:44.470 [2024-05-15 02:54:20.470219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x1810ef 00:29:44.470 [2024-05-15 02:54:20.470405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.470 [2024-05-15 02:54:20.470417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 
02:54:20.470438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470820] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.470971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.470989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:98600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.471 [2024-05-15 02:54:20.471155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef 00:29:44.471 [2024-05-15 02:54:20.471165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98672 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98744 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000776e000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 
key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:29:44.472 
[2024-05-15 02:54:20.471796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.472 [2024-05-15 02:54:20.471940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:29:44.472 [2024-05-15 02:54:20.471951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.471963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.471972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.471985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.471995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472368] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x1810ef 00:29:44.473 [2024-05-15 02:54:20.472473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.473 [2024-05-15 02:54:20.472484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 
cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.472918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.472929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x1810ef 00:29:44.474 [2024-05-15 02:54:20.481829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 
dnr:0 00:29:44.474 [2024-05-15 02:54:20.495406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:44.474 [2024-05-15 02:54:20.495422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:44.474 [2024-05-15 02:54:20.495432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:29:44.474 [2024-05-15 02:54:20.495442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.474 [2024-05-15 02:54:20.497147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.474 [2024-05-15 02:54:20.497430] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.474 [2024-05-15 02:54:20.497448] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.474 [2024-05-15 02:54:20.497457] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.474 [2024-05-15 02:54:20.497477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.474 [2024-05-15 02:54:20.497488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.474 [2024-05-15 02:54:20.497501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.474 [2024-05-15 02:54:20.497511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.474 [2024-05-15 02:54:20.497523] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.474 [2024-05-15 02:54:20.497544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.474 [2024-05-15 02:54:20.497553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.474 [2024-05-15 02:54:21.500012] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.474 [2024-05-15 02:54:21.500052] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.474 [2024-05-15 02:54:21.500061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.474 [2024-05-15 02:54:21.500084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.474 [2024-05-15 02:54:21.500096] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
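The "Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR" errors above show the RDMA transport checking each connection-manager event against the type it was waiting for before address resolution can continue; an ADDR_ERROR with status -19 (-ENODEV) is what drives the controller into the failed state that follows. A rough sketch of that kind of check, using the standard librdmacm event types but a hypothetical helper name, might look like:

#include <rdma/rdma_cma.h>
#include <stdio.h>

/* Return 0 if the received CM event matches what we were waiting for,
 * otherwise log the mismatch (as in the log above) and fail. */
static int validate_cm_event(enum rdma_cm_event_type expected,
                             const struct rdma_cm_event *ev)
{
    if (ev->event != expected) {
        fprintf(stderr, "ERROR: Expected %s but received %s (status = %d)\n",
                rdma_event_str(expected), rdma_event_str(ev->event), ev->status);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Fabricated event just to exercise the mismatch path seen in the log. */
    struct rdma_cm_event ev = { .event = RDMA_CM_EVENT_ADDR_ERROR, .status = -19 };
    return validate_cm_event(RDMA_CM_EVENT_ADDR_RESOLVED, &ev) ? 1 : 0;
}

In a real caller this check would sit after rdma_resolve_addr() and rdma_get_cm_event(); when it fails, the queue pair cannot connect, which matches the "Failed to connect rqpair" and "CQ transport error -6 (No such device or address)" entries above.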
00:29:44.474 [2024-05-15 02:54:21.500913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.474 [2024-05-15 02:54:21.500932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.474 [2024-05-15 02:54:21.500943] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.474 [2024-05-15 02:54:21.500974] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.474 [2024-05-15 02:54:21.500986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.474 [2024-05-15 02:54:22.503445] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.474 [2024-05-15 02:54:22.503484] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.474 [2024-05-15 02:54:22.503493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.474 [2024-05-15 02:54:22.503515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.475 [2024-05-15 02:54:22.503526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.475 [2024-05-15 02:54:22.503539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.475 [2024-05-15 02:54:22.503549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.475 [2024-05-15 02:54:22.503560] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.475 [2024-05-15 02:54:22.503583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.475 [2024-05-15 02:54:22.503593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.475 [2024-05-15 02:54:24.508432] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.475 [2024-05-15 02:54:24.508471] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.475 [2024-05-15 02:54:24.508499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.475 [2024-05-15 02:54:24.508510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.475 [2024-05-15 02:54:24.509370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.475 [2024-05-15 02:54:24.509386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.475 [2024-05-15 02:54:24.509400] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.475 [2024-05-15 02:54:24.509427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.475 [2024-05-15 02:54:24.509437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.475 [2024-05-15 02:54:26.514267] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.475 [2024-05-15 02:54:26.514299] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.475 [2024-05-15 02:54:26.514328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.475 [2024-05-15 02:54:26.514340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.475 [2024-05-15 02:54:26.514355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.475 [2024-05-15 02:54:26.514365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.475 [2024-05-15 02:54:26.514378] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.475 [2024-05-15 02:54:26.514403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.475 [2024-05-15 02:54:26.514418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.475 [2024-05-15 02:54:28.519254] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.475 [2024-05-15 02:54:28.519292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:29:44.475 [2024-05-15 02:54:28.519318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.475 [2024-05-15 02:54:28.519330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:29:44.475 [2024-05-15 02:54:28.519343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:29:44.475 [2024-05-15 02:54:28.519353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:29:44.475 [2024-05-15 02:54:28.519366] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:29:44.475 [2024-05-15 02:54:28.519390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.475 [2024-05-15 02:54:28.519400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:29:44.475 [2024-05-15 02:54:29.585590] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:44.475 [2024-05-15 02:54:29.679222] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:29:44.475 [2024-05-15 02:54:29.679252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.475 [2024-05-15 02:54:29.679265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.475 [2024-05-15 02:54:29.679277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.475 [2024-05-15 02:54:29.679288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.475 [2024-05-15 02:54:29.679299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.475 [2024-05-15 02:54:29.679308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.475 [2024-05-15 02:54:29.679319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:44.475 [2024-05-15 02:54:29.679329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32509 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:29:44.475 [2024-05-15 02:54:29.684847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.475 [2024-05-15 02:54:29.684862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.475 [2024-05-15 02:54:29.684894] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:29:44.475 [2024-05-15 02:54:29.689229] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.699256] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.709281] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.719308] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.729335] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.739359] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.749384] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.759411] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.769437] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.779461] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.475 [2024-05-15 02:54:29.789486] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.799512] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.809535] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.819561] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.829588] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.839614] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.849639] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.859666] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.869693] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.879718] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.889744] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.899771] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.909797] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.919823] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.929851] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.939877] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.949903] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.959927] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.969952] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.980097] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:29.990122] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.000146] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.010174] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.475 [2024-05-15 02:54:30.020200] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.030227] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.040253] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.050278] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.060304] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.070331] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.080357] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.090386] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.475 [2024-05-15 02:54:30.100412] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.110437] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.120464] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.130491] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.140515] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.150542] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.160568] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.170594] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.180621] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.190646] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.200671] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.210698] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.220722] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.230749] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.240774] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.476 [2024-05-15 02:54:30.250800] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.260824] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.270850] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.280876] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.290904] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.300931] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.310956] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.320983] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.331009] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.341035] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.351060] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.361085] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.371111] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.381137] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.391164] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.401191] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.411218] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.421244] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.431268] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.441295] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.451322] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.461349] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.471374] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.476 [2024-05-15 02:54:30.481402] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.491427] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.501453] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.511479] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.521505] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.531533] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.541558] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.551908] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.561933] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.571960] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.582212] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.592236] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.602262] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.612950] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.622976] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.633586] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.643612] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.654200] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.664224] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.674797] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:44.476 [2024-05-15 02:54:30.684823] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:44.476 [2024-05-15 02:54:30.687446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:213104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf0ef 00:29:44.476 [2024-05-15 02:54:30.687466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.476 [2024-05-15 02:54:30.687484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:213112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf0ef 00:29:44.476 [2024-05-15 02:54:30.687494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.476 [2024-05-15 02:54:30.687506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:213120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf0ef 00:29:44.476 [2024-05-15 02:54:30.687517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.476 [2024-05-15 02:54:30.687529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:213128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bf0ef 00:29:44.476 [2024-05-15 02:54:30.687539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.476 [2024-05-15 02:54:30.687551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:213136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bf0ef 00:29:44.476 [2024-05-15 02:54:30.687562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:213144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:213152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:213160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:213168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 
02:54:30.687663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:213176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:213184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:213192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:213200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:213208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:213216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:213224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:213232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:213240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687862] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:213248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:213256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:213264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:213272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:213280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.687974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.687989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:213288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:213296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:213304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:213312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:213320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:213328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:213336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:213344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:213352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:213360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:213368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:213376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:213384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:213392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:213400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.477 [2024-05-15 02:54:30.688331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:213408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bf0ef 00:29:44.477 [2024-05-15 02:54:30.688343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:213416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:213424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:213432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:213440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:213448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:213456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:213464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:213472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:213480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:213488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:213496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:213504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:213512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:213520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:213528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:213536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:213544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:213552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:213560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:213568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:213576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:213584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:213592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:213600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x1bf0ef 00:29:44.478 [2024-05-15 02:54:30.688853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.478 [2024-05-15 02:54:30.688865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:213608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.688887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:213616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.688913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:213624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.688938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:213632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.688960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:213640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.688982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:213648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.688992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:213656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:213664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:213672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:213680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:213688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:213696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:213704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b2000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:213712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b4000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:213720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b6000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:213728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b8000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:213736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ba000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:213744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079bc000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:213752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079be000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:213760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c0000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:213768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c2000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:213776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c4000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:213784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c6000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:213792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:213800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ca000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:213808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079cc000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:213816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ce000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.479 [2024-05-15 02:54:30.689471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:213824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d0000 len:0x1000 key:0x1bf0ef 00:29:44.479 [2024-05-15 02:54:30.689481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.480 [2024-05-15 02:54:30.689493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:213832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d2000 len:0x1000 key:0x1bf0ef 00:29:44.480 [2024-05-15 02:54:30.689502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.480 [2024-05-15 02:54:30.689513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:213840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d4000 len:0x1000 key:0x1bf0ef 00:29:44.480 [2024-05-15 02:54:30.689522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.480 [2024-05-15 02:54:30.689533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:213848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d6000 len:0x1000 key:0x1bf0ef 00:29:44.480 [2024-05-15 02:54:30.689543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32509 cdw0:a9aad460 sqhd:e530 p:0 m:0 dnr:0 00:29:44.480 [2024-05-15 02:54:30.703042] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:29:44.480 [2024-05-15 02:54:30.703114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:44.480 [2024-05-15 02:54:30.703124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:44.480 [2024-05-15 02:54:30.703134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:213856 len:8 PRP1 0x0 PRP2 0x0 00:29:44.480 [2024-05-15 02:54:30.703144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:44.480 [2024-05-15 02:54:30.703358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:30.705498] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.480 [2024-05-15 02:54:30.705519] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:30.705528] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:30.705547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:30.705559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
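Every completion in the read dump above reports the same status, ABORTED - SQ DELETION (00/08): the parenthesised pair is the NVMe status code type and status code in hex, i.e. Generic Command Status (0x00) / Command Aborted due to SQ Deletion (0x08), which is what these queued reads receive once qpair 1 is torn down during device removal (see the spdk_rdma_qp_destroy warning in the same stretch of the log). A small helper for decoding that pair when scanning such logs might look like the sketch below; the function is illustrative only and is not part of the test scripts.

decode_nvme_status() {
        # $1 is the hex "SCT/SC" pair exactly as spdk_nvme_print_completion
        # prints it, e.g. "00/08" in the entries above.
        local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
        case "$sct/$sc" in
                0/8) echo "Generic Command Status: Command Aborted due to SQ Deletion" ;;
                *)   echo "SCT=$sct SC=$sc (see the status code tables in the NVMe base specification)" ;;
        esac
}
decode_nvme_status 00/08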
00:29:44.480 [2024-05-15 02:54:30.705588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:30.705599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:30.705613] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:30.705637] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:30.705647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:31.708110] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.480 [2024-05-15 02:54:31.708151] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:31.708160] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:31.708183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:31.708194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.480 [2024-05-15 02:54:31.708550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:31.708563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:31.708576] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:31.708601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:31.708610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:32.711076] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:29:44.480 [2024-05-15 02:54:32.711115] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:32.711124] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:32.711147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:32.711159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:29:44.480 [2024-05-15 02:54:32.711172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:32.711182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:32.711193] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:32.711219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:32.711229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:34.716372] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:34.716416] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:34.716444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:34.716456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.480 [2024-05-15 02:54:34.716477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:34.716487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:34.716504] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:34.716538] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:34.716549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:36.722352] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:36.722393] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:36.722425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:36.722436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.480 [2024-05-15 02:54:36.722451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:36.722461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:36.722474] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:36.722507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.480 [2024-05-15 02:54:36.722517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:38.727362] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:38.727404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:38.727432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:38.727443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.480 [2024-05-15 02:54:38.727457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:38.727466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:38.727479] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:38.727504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:38.727513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:40.732615] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:29:44.480 [2024-05-15 02:54:40.732654] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:29:44.480 [2024-05-15 02:54:40.732685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:44.480 [2024-05-15 02:54:40.732697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:29:44.480 [2024-05-15 02:54:40.732710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:29:44.480 [2024-05-15 02:54:40.732721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:29:44.480 [2024-05-15 02:54:40.732732] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:29:44.480 [2024-05-15 02:54:40.732757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.480 [2024-05-15 02:54:40.732768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:29:44.480 [2024-05-15 02:54:41.785808] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
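The reset attempts above land at roughly 02:54:30, :31, :32, :34, :36, :38 and :40 and each fails with the same RDMA address resolution error, until the attempt at 02:54:41 finally logs "Resetting controller successful" once the removed port is reachable again. The behaviour is a plain retry-with-delay loop; a generic bash sketch of that pattern (purely illustrative, not SPDK's bdev_nvme reset path) is:

retry_until_ok() {
        # Run "$@" until it succeeds, sleeping $1 seconds between attempts
        # and giving up after $2 tries - the shape of the reset loop above.
        local delay=$1 max=$2 n
        shift 2
        for ((n = 0; n < max; n++)); do
                "$@" && return 0
                sleep "$delay"
        done
        return 1
}
# e.g. retry_until_ok 2 45 some_reset_command   # roughly 90 s of attempts, as in this test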
00:29:44.480
00:29:44.480 Latency(us)
00:29:44.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.480 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:44.480 Verification LBA range: start 0x0 length 0x8000
00:29:44.480 Nvme_mlx_0_0n1 : 90.01 8413.66 32.87 0.00 0.00 15193.90 3405.02 11087551.44
00:29:44.480 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:44.480 Verification LBA range: start 0x0 length 0x8000
00:29:44.480 Nvme_mlx_0_1n1 : 90.02 7081.62 27.66 0.00 0.00 18055.17 2607.19 13071639.60
00:29:44.480 ===================================================================================================================
00:29:44.481 Total : 15495.28 60.53 0.00 0.00 16501.58 2607.19 13071639.60
00:29:44.481 Received shutdown signal, test time was about 90.000000 seconds
00:29:44.481
00:29:44.481 Latency(us)
00:29:44.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.481 ===================================================================================================================
00:29:44.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 909819 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@947 -- # '[' -z 909819 ']' 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@951 -- # kill -0 909819 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # uname 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 909819 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@965 -- # echo 'killing process with pid 909819' 00:29:44.481 killing process with pid 909819 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@966 -- # kill 909819 00:29:44.481 [2024-05-15 02:55:44.977746] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:44.481 02:55:44 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@971 -- # wait 909819 00:29:44.481 [2024-05-15 02:55:45.046782] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:29:44.481 02:55:45 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid= 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0 00:29:44.481 00:29:44.481 real 1m32.259s 00:29:44.481 user 4m21.731s 00:29:44.481 sys 0m5.848s 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:29:44.481 ************************************ 00:29:44.481 END TEST nvmf_device_removal_pci_remove 00:29:44.481 ************************************ 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:44.481 rmmod nvme_rdma 00:29:44.481 rmmod nvme_fabrics 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:29:44.481 00:29:44.481 real 3m12.401s 00:29:44.481 user 8m47.461s 00:29:44.481 sys 0m17.247s 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:44.481 02:55:45 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:29:44.481 ************************************ 00:29:44.481 END TEST nvmf_device_removal 00:29:44.481 ************************************ 00:29:44.481 02:55:45 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:29:44.481 02:55:45 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:44.481 02:55:45 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:44.481 02:55:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:44.481 ************************************ 00:29:44.481 START TEST nvmf_srq_overwhelm 00:29:44.481 ************************************ 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:29:44.481 * Looking for test storage... 00:29:44.481 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.481 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:29:44.482 02:55:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:48.681 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:29:48.682 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:29:48.682 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:29:48.682 Found net devices under 0000:18:00.0: mlx_0_0 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:29:48.682 Found net devices under 0000:18:00.1: mlx_0_1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:48.682 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.682 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:29:48.682 altname enp24s0f0np0 00:29:48.682 altname ens785f0np0 00:29:48.682 inet 192.168.100.8/24 scope global mlx_0_0 00:29:48.682 valid_lft forever preferred_lft forever 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:48.682 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:48.682 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:48.683 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:29:48.683 altname enp24s0f1np1 00:29:48.683 altname ens785f1np1 00:29:48.683 inet 192.168.100.9/24 scope global mlx_0_1 00:29:48.683 valid_lft forever preferred_lft forever 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
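The allocate_nic_ips trace above resolves each RDMA interface to its IPv4 address with get_ip_address from the sourced nvmf/common.sh, which boils down to the ip/awk/cut pipeline shown in the trace. Condensed into a self-contained sketch:

get_ip_address() {
        local interface=$1
        # The fourth field of `ip -o -4 addr show <if>` is "<addr>/<prefix>";
        # drop the prefix length to get the bare address.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
get_ip_address mlx_0_1    # -> 192.168.100.9 in this run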
00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:48.683 
192.168.100.9' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:48.683 192.168.100.9' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:48.683 192.168.100.9' 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:29:48.683 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:29:48.943 02:55:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=924614 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 924614 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@828 -- # '[' -z 924614 ']' 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:48.943 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:48.943 [2024-05-15 02:55:52.061692] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:48.943 [2024-05-15 02:55:52.061767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.943 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.944 [2024-05-15 02:55:52.170165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.944 [2024-05-15 02:55:52.224218] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:48.944 [2024-05-15 02:55:52.224269] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.944 [2024-05-15 02:55:52.224287] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.944 [2024-05-15 02:55:52.224300] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.944 [2024-05-15 02:55:52.224311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.944 [2024-05-15 02:55:52.227921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.944 [2024-05-15 02:55:52.228186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.944 [2024-05-15 02:55:52.228306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.944 [2024-05-15 02:55:52.228307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@861 -- # return 0 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.880 02:55:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.880 [2024-05-15 02:55:52.945197] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x206bd70/0x2070260) succeed. 00:29:49.880 [2024-05-15 02:55:52.961101] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x206d3b0/0x20b18f0) succeed. 
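With nvmf_tgt running (started above with -i 0 -e 0xFFFF -m 0xF) and the RDMA transport created, which is what produced the two create_ib_device notices, the trace below repeats the same five steps for each of the six subsystems (seq 0 5): create the subsystem, back it with a 64 MiB / 512-byte-block malloc bdev, attach that bdev as a namespace, add an RDMA listener on 192.168.100.8:4420, connect from the host side, and wait for the new block device to appear. The sketch below is condensed from the rpc_cmd and nvme invocations visible in the trace; the explicit rpc.py path and the lsblk polling are simplifications of the real rpc_cmd and waitforblk helpers.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e
hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
for i in $(seq 0 5); do
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" \
                -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
        # waitforblk: poll until the new namespace shows up as nvme<i>n1
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
done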
00:29:49.880 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.880 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.881 Malloc0 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:49.881 [2024-05-15 02:55:53.076936] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:49.881 [2024-05-15 02:55:53.077367] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.881 02:55:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:29:50.814 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:29:50.814 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:50.814 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:50.814 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme0n1 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme0n1 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # 
return 0 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:51.073 Malloc1 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.073 02:55:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme1n1 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme1n1 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # return 0 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.009 Malloc2 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:52.009 02:55:55 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme2n1 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme2n1 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # return 0 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:52.944 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:53.203 Malloc3 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:53.203 02:55:56 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme3n1 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme3n1 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # return 0 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:54.140 Malloc4 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 
192.168.100.8 -s 4420 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:54.140 02:55:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme4n1 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme4n1 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # return 0 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:55.076 Malloc5 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:29:55.076 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.077 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:55.077 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.077 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:29:55.077 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.077 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:29:55.335 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.335 02:55:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -t rdma 
-n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # local i=0 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # lsblk -l -o NAME 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1233 -- # grep -q -w nvme5n1 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # lsblk -l -o NAME 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # grep -q -w nvme5n1 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1243 -- # return 0 00:29:56.273 02:55:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:29:56.273 [global] 00:29:56.273 thread=1 00:29:56.273 invalidate=1 00:29:56.273 rw=read 00:29:56.273 time_based=1 00:29:56.273 runtime=10 00:29:56.273 ioengine=libaio 00:29:56.273 direct=1 00:29:56.273 bs=1048576 00:29:56.273 iodepth=128 00:29:56.273 norandommap=1 00:29:56.273 numjobs=13 00:29:56.273 00:29:56.273 [job0] 00:29:56.273 filename=/dev/nvme0n1 00:29:56.273 [job1] 00:29:56.274 filename=/dev/nvme1n1 00:29:56.274 [job2] 00:29:56.274 filename=/dev/nvme2n1 00:29:56.274 [job3] 00:29:56.274 filename=/dev/nvme3n1 00:29:56.274 [job4] 00:29:56.274 filename=/dev/nvme4n1 00:29:56.274 [job5] 00:29:56.274 filename=/dev/nvme5n1 00:29:56.274 Could not set queue depth (nvme0n1) 00:29:56.274 Could not set queue depth (nvme1n1) 00:29:56.274 Could not set queue depth (nvme2n1) 00:29:56.274 Could not set queue depth (nvme3n1) 00:29:56.274 Could not set queue depth (nvme4n1) 00:29:56.274 Could not set queue depth (nvme5n1) 00:29:56.533 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 00:29:56.533 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 00:29:56.533 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 00:29:56.533 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 00:29:56.533 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 00:29:56.533 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:29:56.533 ... 
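For reference, the per-subsystem setup traced above (the rpc_cmd calls are the test harness's wrapper around SPDK's scripts/rpc.py) reduces to the shell sequence below. This is a minimal sketch under stated assumptions, not the test script itself: it assumes a running SPDK nvmf target whose RDMA transport was created earlier in the test, scripts/rpc.py reachable on its default RPC socket from the SPDK repo root, nvme-cli on the initiator, and an RDMA-capable port at 192.168.100.8.

    HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}"
    for i in $(seq 0 5); do
        # Create subsystem cnode$i, back it with a 64 MiB, 512 B-block malloc bdev,
        # expose the bdev as a namespace, and listen on NVMe/RDMA port 4420.
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a -s "SPDK0000000000000${i}"
        ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc${i}"
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Malloc${i}"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" -t rdma -a 192.168.100.8 -s 4420
        # Connect from the initiator and wait for the namespace to show up as a
        # block device, mirroring the waitforblk loop in the trace.
        nvme connect -i 15 --hostnqn="${HOSTNQN}" --hostid="${HOSTID}" \
            -t rdma -n "nqn.2016-06.io.spdk:cnode${i}" -a 192.168.100.8 -s 4420
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
    done

With all six namespaces visible as /dev/nvme0n1 through /dev/nvme5n1, the fio-wrapper invocation above runs 13 threads of 1 MiB libaio reads at queue depth 128 against each device for 10 seconds (78 threads in total), matching the job file printed before the run.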
00:29:56.533 fio-3.35 00:29:56.533 Starting 78 threads 00:30:11.482 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925862: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=17, BW=17.3MiB/s (18.1MB/s)(187MiB/10830msec) 00:30:11.483 slat (usec): min=101, max=2128.2k, avg=57523.32, stdev=301152.19 00:30:11.483 clat (msec): min=71, max=9997, avg=6769.36, stdev=3433.39 00:30:11.483 lat (msec): min=1560, max=10020, avg=6826.89, stdev=3401.04 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 1552], 5.00th=[ 1620], 10.00th=[ 1670], 20.00th=[ 1787], 00:30:11.483 | 30.00th=[ 3775], 40.00th=[ 8658], 50.00th=[ 8926], 60.00th=[ 9060], 00:30:11.483 | 70.00th=[ 9194], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9866], 00:30:11.483 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:30:11.483 | 99.99th=[10000] 00:30:11.483 bw ( KiB/s): min= 2048, max=57344, per=0.98%, avg=24159.80, stdev=23416.32, samples=5 00:30:11.483 iops : min= 2, max= 56, avg=23.40, stdev=22.95, samples=5 00:30:11.483 lat (msec) : 100=0.53%, 2000=25.13%, >=2000=74.33% 00:30:11.483 cpu : usr=0.01%, sys=1.25%, ctx=350, majf=0, minf=32769 00:30:11.483 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.6%, 32=17.1%, >=64=66.3% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:30:11.483 issued rwts: total=187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925863: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=4, BW=4274KiB/s (4377kB/s)(54.0MiB/12937msec) 00:30:11.483 slat (usec): min=879, max=2144.2k, avg=199052.76, stdev=607298.25 00:30:11.483 clat (msec): min=2187, max=12935, avg=11168.67, stdev=2659.38 00:30:11.483 lat (msec): min=4242, max=12936, avg=11367.72, stdev=2359.84 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 2198], 5.00th=[ 6409], 10.00th=[ 6409], 20.00th=[ 8557], 00:30:11.483 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:30:11.483 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.483 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.483 | 99.99th=[12953] 00:30:11.483 lat (msec) : >=2000=100.00% 00:30:11.483 cpu : usr=0.00%, sys=0.40%, ctx=60, majf=0, minf=13825 00:30:11.483 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.483 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925864: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=2, BW=2856KiB/s (2924kB/s)(36.0MiB/12909msec) 00:30:11.483 slat (usec): min=648, max=2193.9k, avg=298397.35, stdev=730945.92 00:30:11.483 clat (msec): min=2166, max=12802, avg=11791.15, stdev=2487.67 00:30:11.483 lat (msec): min=4246, max=12908, avg=12089.55, stdev=1867.07 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 8658], 20.00th=[12684], 00:30:11.483 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:30:11.483 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.483 | 99.00th=[12818], 
99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.483 | 99.99th=[12818] 00:30:11.483 lat (msec) : >=2000=100.00% 00:30:11.483 cpu : usr=0.00%, sys=0.24%, ctx=29, majf=0, minf=9217 00:30:11.483 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.483 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925865: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=116, BW=116MiB/s (122MB/s)(1508MiB/12948msec) 00:30:11.483 slat (usec): min=52, max=4310.5k, avg=7162.16, stdev=111351.85 00:30:11.483 clat (msec): min=283, max=7009, avg=1055.01, stdev=1733.08 00:30:11.483 lat (msec): min=283, max=7012, avg=1062.17, stdev=1738.52 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 309], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 388], 00:30:11.483 | 30.00th=[ 426], 40.00th=[ 472], 50.00th=[ 518], 60.00th=[ 617], 00:30:11.483 | 70.00th=[ 693], 80.00th=[ 735], 90.00th=[ 852], 95.00th=[ 6678], 00:30:11.483 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:30:11.483 | 99.99th=[ 7013] 00:30:11.483 bw ( KiB/s): min= 2052, max=366592, per=8.84%, avg=217519.08, stdev=94226.07, samples=13 00:30:11.483 iops : min= 2, max= 358, avg=212.38, stdev=92.00, samples=13 00:30:11.483 lat (msec) : 500=46.09%, 750=37.73%, 1000=7.69%, >=2000=8.49% 00:30:11.483 cpu : usr=0.06%, sys=1.83%, ctx=1702, majf=0, minf=32769 00:30:11.483 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.483 issued rwts: total=1508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925866: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=4, BW=4521KiB/s (4630kB/s)(57.0MiB/12910msec) 00:30:11.483 slat (usec): min=613, max=4275.2k, avg=188318.56, stdev=712387.01 00:30:11.483 clat (msec): min=2175, max=12785, avg=12131.37, stdev=1926.45 00:30:11.483 lat (msec): min=4260, max=12909, avg=12319.69, stdev=1384.14 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 2165], 5.00th=[ 6409], 10.00th=[12416], 20.00th=[12550], 00:30:11.483 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12550], 60.00th=[12684], 00:30:11.483 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:30:11.483 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.483 | 99.99th=[12818] 00:30:11.483 lat (msec) : >=2000=100.00% 00:30:11.483 cpu : usr=0.00%, sys=0.41%, ctx=96, majf=0, minf=14593 00:30:11.483 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.483 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925867: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=1, BW=1188KiB/s (1217kB/s)(15.0MiB/12925msec) 00:30:11.483 slat 
(usec): min=849, max=4212.4k, avg=715826.08, stdev=1288816.15 00:30:11.483 clat (msec): min=2186, max=12922, avg=10858.13, stdev=3179.16 00:30:11.483 lat (msec): min=6399, max=12924, avg=11573.95, stdev=2119.45 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 6409], 20.00th=[ 8557], 00:30:11.483 | 30.00th=[10671], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:30:11.483 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.483 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.483 | 99.99th=[12953] 00:30:11.483 lat (msec) : >=2000=100.00% 00:30:11.483 cpu : usr=0.00%, sys=0.11%, ctx=34, majf=0, minf=3841 00:30:11.483 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925868: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=5, BW=5131KiB/s (5254kB/s)(65.0MiB/12973msec) 00:30:11.483 slat (usec): min=815, max=2144.0k, avg=165937.94, stdev=557595.77 00:30:11.483 clat (msec): min=2186, max=12971, avg=11543.11, stdev=2522.30 00:30:11.483 lat (msec): min=4245, max=12972, avg=11709.05, stdev=2235.62 00:30:11.483 clat percentiles (msec): 00:30:11.483 | 1.00th=[ 2198], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[ 8658], 00:30:11.483 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:30:11.483 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.483 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.483 | 99.99th=[12953] 00:30:11.483 lat (msec) : >=2000=100.00% 00:30:11.483 cpu : usr=0.00%, sys=0.48%, ctx=88, majf=0, minf=16641 00:30:11.483 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:30:11.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.483 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.483 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.483 job0: (groupid=0, jobs=1): err= 0: pid=925869: Wed May 15 02:56:13 2024 00:30:11.483 read: IOPS=23, BW=23.6MiB/s (24.8MB/s)(306MiB/12960msec) 00:30:11.483 slat (usec): min=63, max=2164.0k, avg=35198.41, stdev=230030.51 00:30:11.483 clat (msec): min=774, max=12793, avg=4166.79, stdev=1615.83 00:30:11.483 lat (msec): min=775, max=12913, avg=4201.99, stdev=1677.07 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 827], 5.00th=[ 2039], 10.00th=[ 2635], 20.00th=[ 3675], 00:30:11.484 | 30.00th=[ 3842], 40.00th=[ 4044], 50.00th=[ 4144], 60.00th=[ 4144], 00:30:11.484 | 70.00th=[ 4212], 80.00th=[ 4245], 90.00th=[ 6409], 95.00th=[ 7752], 00:30:11.484 | 99.00th=[10805], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.484 | 99.99th=[12818] 00:30:11.484 bw ( KiB/s): min= 1412, max=96063, per=2.48%, avg=60929.00, stdev=37480.02, samples=6 00:30:11.484 iops : min= 1, max= 93, avg=59.17, stdev=36.43, samples=6 00:30:11.484 lat (msec) : 1000=2.94%, >=2000=97.06% 00:30:11.484 cpu : usr=0.02%, sys=0.90%, ctx=205, majf=0, minf=32769 00:30:11.484 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 
32=10.5%, >=64=79.4% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:30:11.484 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job0: (groupid=0, jobs=1): err= 0: pid=925870: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=127, BW=128MiB/s (134MB/s)(1649MiB/12899msec) 00:30:11.484 slat (usec): min=54, max=2145.9k, avg=6529.19, stdev=90818.63 00:30:11.484 clat (msec): min=134, max=8837, avg=960.63, stdev=2240.51 00:30:11.484 lat (msec): min=134, max=8837, avg=967.16, stdev=2248.10 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 165], 20.00th=[ 167], 00:30:11.484 | 30.00th=[ 171], 40.00th=[ 184], 50.00th=[ 230], 60.00th=[ 347], 00:30:11.484 | 70.00th=[ 493], 80.00th=[ 558], 90.00th=[ 651], 95.00th=[ 8792], 00:30:11.484 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:30:11.484 | 99.99th=[ 8792] 00:30:11.484 bw ( KiB/s): min= 2048, max=770048, per=11.52%, avg=283369.09, stdev=254583.79, samples=11 00:30:11.484 iops : min= 2, max= 752, avg=276.73, stdev=248.62, samples=11 00:30:11.484 lat (msec) : 250=52.27%, 500=18.50%, 750=21.29%, >=2000=7.94% 00:30:11.484 cpu : usr=0.02%, sys=1.79%, ctx=1821, majf=0, minf=32769 00:30:11.484 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.484 issued rwts: total=1649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job0: (groupid=0, jobs=1): err= 0: pid=925871: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=36, BW=36.3MiB/s (38.1MB/s)(472MiB/13002msec) 00:30:11.484 slat (usec): min=57, max=2103.2k, avg=22836.21, stdev=189244.03 00:30:11.484 clat (msec): min=230, max=11101, avg=3333.52, stdev=4435.12 00:30:11.484 lat (msec): min=231, max=11103, avg=3356.36, stdev=4446.94 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 232], 5.00th=[ 266], 10.00th=[ 288], 20.00th=[ 330], 00:30:11.484 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 489], 60.00th=[ 1385], 00:30:11.484 | 70.00th=[ 2567], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073], 00:30:11.484 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:30:11.484 | 99.99th=[11073] 00:30:11.484 bw ( KiB/s): min= 2048, max=391168, per=4.10%, avg=100933.00, stdev=141708.74, samples=7 00:30:11.484 iops : min= 2, max= 382, avg=98.43, stdev=138.49, samples=7 00:30:11.484 lat (msec) : 250=2.12%, 500=48.31%, 750=3.18%, 1000=2.97%, 2000=12.50% 00:30:11.484 lat (msec) : >=2000=30.93% 00:30:11.484 cpu : usr=0.00%, sys=1.18%, ctx=585, majf=0, minf=32206 00:30:11.484 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:30:11.484 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job0: (groupid=0, jobs=1): err= 0: pid=925872: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=9, BW=9.97MiB/s (10.5MB/s)(129MiB/12934msec) 00:30:11.484 slat (usec): min=641, 
max=6369.2k, avg=83488.18, stdev=587695.45 00:30:11.484 clat (msec): min=2162, max=12842, avg=11465.08, stdev=1278.40 00:30:11.484 lat (msec): min=4401, max=12859, avg=11548.56, stdev=981.55 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 4396], 5.00th=[10671], 10.00th=[10805], 20.00th=[11073], 00:30:11.484 | 30.00th=[11208], 40.00th=[11342], 50.00th=[11610], 60.00th=[11745], 00:30:11.484 | 70.00th=[12013], 80.00th=[12281], 90.00th=[12550], 95.00th=[12684], 00:30:11.484 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.484 | 99.99th=[12818] 00:30:11.484 bw ( KiB/s): min= 2048, max= 2052, per=0.08%, avg=2050.00, stdev= 2.83, samples=2 00:30:11.484 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 00:30:11.484 lat (msec) : >=2000=100.00% 00:30:11.484 cpu : usr=0.02%, sys=0.86%, ctx=310, majf=0, minf=32769 00:30:11.484 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.4%, 32=24.8%, >=64=51.2% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=66.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=33.3% 00:30:11.484 issued rwts: total=129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job0: (groupid=0, jobs=1): err= 0: pid=925873: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=5, BW=5315KiB/s (5443kB/s)(56.0MiB/10789msec) 00:30:11.484 slat (usec): min=507, max=2088.6k, avg=191198.26, stdev=585433.43 00:30:11.484 clat (msec): min=80, max=10786, avg=6721.27, stdev=3631.03 00:30:11.484 lat (msec): min=2078, max=10787, avg=6912.47, stdev=3556.13 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 81], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 2123], 00:30:11.484 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:30:11.484 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:30:11.484 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.484 | 99.99th=[10805] 00:30:11.484 lat (msec) : 100=1.79%, >=2000=98.21% 00:30:11.484 cpu : usr=0.00%, sys=0.46%, ctx=56, majf=0, minf=14337 00:30:11.484 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.484 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job0: (groupid=0, jobs=1): err= 0: pid=925874: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=2, BW=2149KiB/s (2201kB/s)(27.0MiB/12864msec) 00:30:11.484 slat (usec): min=894, max=4286.9k, avg=396334.37, stdev=1013743.07 00:30:11.484 clat (msec): min=2162, max=12861, avg=11274.38, stdev=2551.73 00:30:11.484 lat (msec): min=6449, max=12863, avg=11670.72, stdev=1803.33 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 2165], 5.00th=[ 6477], 10.00th=[ 8557], 20.00th=[10671], 00:30:11.484 | 30.00th=[10805], 40.00th=[10805], 50.00th=[12818], 60.00th=[12818], 00:30:11.484 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.484 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.484 | 99.99th=[12818] 00:30:11.484 lat (msec) : >=2000=100.00% 00:30:11.484 cpu : usr=0.01%, sys=0.19%, ctx=24, majf=0, minf=6913 00:30:11.484 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 
00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:30:11.484 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job1: (groupid=0, jobs=1): err= 0: pid=925875: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=13, BW=13.5MiB/s (14.1MB/s)(173MiB/12831msec) 00:30:11.484 slat (usec): min=431, max=6387.0k, avg=62308.18, stdev=513097.33 00:30:11.484 clat (msec): min=1061, max=9664, avg=3616.22, stdev=2144.45 00:30:11.484 lat (msec): min=1088, max=12792, avg=3678.53, stdev=2249.73 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 1083], 5.00th=[ 1099], 10.00th=[ 1133], 20.00th=[ 2937], 00:30:11.484 | 30.00th=[ 3071], 40.00th=[ 3205], 50.00th=[ 3306], 60.00th=[ 3473], 00:30:11.484 | 70.00th=[ 3608], 80.00th=[ 3775], 90.00th=[ 6409], 95.00th=[ 9597], 00:30:11.484 | 99.00th=[ 9597], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:30:11.484 | 99.99th=[ 9731] 00:30:11.484 bw ( KiB/s): min= 2052, max=90530, per=1.88%, avg=46291.00, stdev=62563.39, samples=2 00:30:11.484 iops : min= 2, max= 88, avg=45.00, stdev=60.81, samples=2 00:30:11.484 lat (msec) : 2000=17.34%, >=2000=82.66% 00:30:11.484 cpu : usr=0.02%, sys=0.77%, ctx=251, majf=0, minf=32769 00:30:11.484 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:30:11.484 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job1: (groupid=0, jobs=1): err= 0: pid=925876: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=0, BW=716KiB/s (733kB/s)(9216KiB/12880msec) 00:30:11.484 slat (msec): min=10, max=6318, avg=1197.08, stdev=2130.91 00:30:11.484 clat (msec): min=2105, max=12835, avg=9251.97, stdev=4394.94 00:30:11.484 lat (msec): min=4289, max=12879, avg=10449.05, stdev=3600.63 00:30:11.484 clat percentiles (msec): 00:30:11.484 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 4279], 00:30:11.484 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[12818], 60.00th=[12818], 00:30:11.484 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.484 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.484 | 99.99th=[12818] 00:30:11.484 lat (msec) : >=2000=100.00% 00:30:11.484 cpu : usr=0.00%, sys=0.09%, ctx=40, majf=0, minf=2305 00:30:11.484 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.484 issued rwts: total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.484 job1: (groupid=0, jobs=1): err= 0: pid=925877: Wed May 15 02:56:13 2024 00:30:11.484 read: IOPS=18, BW=18.2MiB/s (19.0MB/s)(234MiB/12884msec) 00:30:11.485 slat (usec): min=90, max=3676.3k, avg=54760.03, stdev=360667.63 00:30:11.485 clat (msec): min=67, max=9170, avg=5428.49, stdev=2925.69 00:30:11.485 lat (msec): min=1063, max=9175, avg=5483.25, stdev=2923.46 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 1062], 5.00th=[ 1133], 10.00th=[ 3004], 
20.00th=[ 3171], 00:30:11.485 | 30.00th=[ 3406], 40.00th=[ 3540], 50.00th=[ 3708], 60.00th=[ 3977], 00:30:11.485 | 70.00th=[ 8926], 80.00th=[ 9060], 90.00th=[ 9060], 95.00th=[ 9060], 00:30:11.485 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:30:11.485 | 99.99th=[ 9194] 00:30:11.485 bw ( KiB/s): min=101209, max=112640, per=4.35%, avg=106924.50, stdev=8082.94, samples=2 00:30:11.485 iops : min= 98, max= 110, avg=104.00, stdev= 8.49, samples=2 00:30:11.485 lat (msec) : 100=0.43%, 2000=7.26%, >=2000=92.31% 00:30:11.485 cpu : usr=0.02%, sys=0.89%, ctx=410, majf=0, minf=32769 00:30:11.485 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:30:11.485 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925878: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=1, BW=1827KiB/s (1871kB/s)(23.0MiB/12892msec) 00:30:11.485 slat (usec): min=892, max=4304.9k, avg=558095.79, stdev=1305772.34 00:30:11.485 clat (msec): min=55, max=12890, avg=11064.25, stdev=3569.82 00:30:11.485 lat (msec): min=4237, max=12891, avg=11622.34, stdev=2657.28 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 56], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 8557], 00:30:11.485 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:30:11.485 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : 100=4.35%, >=2000=95.65% 00:30:11.485 cpu : usr=0.00%, sys=0.16%, ctx=37, majf=0, minf=5889 00:30:11.485 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:30:11.485 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925879: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=1, BW=1898KiB/s (1944kB/s)(24.0MiB/12947msec) 00:30:11.485 slat (usec): min=909, max=4207.9k, avg=451190.74, stdev=1076904.10 00:30:11.485 clat (msec): min=2117, max=12942, avg=11640.75, stdev=3044.85 00:30:11.485 lat (msec): min=4289, max=12946, avg=12091.94, stdev=2278.13 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[12818], 00:30:11.485 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:30:11.485 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : >=2000=100.00% 00:30:11.485 cpu : usr=0.00%, sys=0.20%, ctx=49, majf=0, minf=6145 00:30:11.485 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:30:11.485 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925880: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=5, BW=5435KiB/s (5566kB/s)(69.0MiB/12999msec) 00:30:11.485 slat (usec): min=621, max=2185.8k, avg=157049.10, stdev=543450.98 00:30:11.485 clat (msec): min=2162, max=12996, avg=11009.39, stdev=3324.27 00:30:11.485 lat (msec): min=4227, max=12998, avg=11166.44, stdev=3151.64 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 8658], 00:30:11.485 | 30.00th=[10805], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953], 00:30:11.485 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : >=2000=100.00% 00:30:11.485 cpu : usr=0.00%, sys=0.48%, ctx=93, majf=0, minf=17665 00:30:11.485 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.2%, 32=46.4%, >=64=8.7% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.485 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925881: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=2, BW=3011KiB/s (3083kB/s)(38.0MiB/12924msec) 00:30:11.485 slat (usec): min=883, max=4169.5k, avg=338311.36, stdev=905692.31 00:30:11.485 clat (msec): min=67, max=12922, avg=9259.13, stdev=3781.41 00:30:11.485 lat (msec): min=4237, max=12923, avg=9597.44, stdev=3501.60 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 68], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 4329], 00:30:11.485 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[12684], 00:30:11.485 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : 100=2.63%, >=2000=97.37% 00:30:11.485 cpu : usr=0.01%, sys=0.27%, ctx=51, majf=0, minf=9729 00:30:11.485 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.485 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925882: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=9, BW=9571KiB/s (9801kB/s)(121MiB/12946msec) 00:30:11.485 slat (usec): min=567, max=4166.1k, avg=106530.20, stdev=517070.23 00:30:11.485 clat (msec): min=54, max=12941, avg=12018.63, stdev=2071.29 00:30:11.485 lat (msec): min=4221, max=12945, avg=12125.16, stdev=1758.75 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 4212], 5.00th=[ 6409], 10.00th=[12416], 20.00th=[12416], 00:30:11.485 | 30.00th=[12416], 40.00th=[12550], 50.00th=[12550], 60.00th=[12550], 00:30:11.485 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : 100=0.83%, >=2000=99.17% 00:30:11.485 cpu : usr=0.01%, sys=0.77%, ctx=150, majf=0, minf=30977 
00:30:11.485 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.6%, 16=13.2%, 32=26.4%, >=64=47.9% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.485 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925883: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=3, BW=3808KiB/s (3899kB/s)(48.0MiB/12909msec) 00:30:11.485 slat (usec): min=787, max=4196.9k, avg=267762.20, stdev=814458.89 00:30:11.485 clat (msec): min=55, max=12906, avg=11027.85, stdev=2635.66 00:30:11.485 lat (msec): min=4252, max=12908, avg=11295.61, stdev=2094.57 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 56], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[ 8658], 00:30:11.485 | 30.00th=[10671], 40.00th=[10805], 50.00th=[12684], 60.00th=[12818], 00:30:11.485 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : 100=2.08%, >=2000=97.92% 00:30:11.485 cpu : usr=0.00%, sys=0.35%, ctx=48, majf=0, minf=12289 00:30:11.485 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.485 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925884: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=5, BW=6076KiB/s (6222kB/s)(77.0MiB/12976msec) 00:30:11.485 slat (usec): min=861, max=4211.8k, avg=141158.58, stdev=614628.61 00:30:11.485 clat (msec): min=2105, max=12973, avg=8588.70, stdev=3377.30 00:30:11.485 lat (msec): min=4247, max=12975, avg=8729.86, stdev=3329.59 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 2106], 5.00th=[ 4329], 10.00th=[ 6208], 20.00th=[ 6208], 00:30:11.485 | 30.00th=[ 6275], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6477], 00:30:11.485 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.485 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.485 | 99.99th=[12953] 00:30:11.485 lat (msec) : >=2000=100.00% 00:30:11.485 cpu : usr=0.00%, sys=0.56%, ctx=87, majf=0, minf=19713 00:30:11.485 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:30:11.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.485 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.485 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.485 job1: (groupid=0, jobs=1): err= 0: pid=925885: Wed May 15 02:56:13 2024 00:30:11.485 read: IOPS=4, BW=4969KiB/s (5089kB/s)(63.0MiB/12982msec) 00:30:11.485 slat (usec): min=809, max=4182.7k, avg=172438.37, stdev=683434.72 00:30:11.485 clat (msec): min=2117, max=12978, avg=11236.76, stdev=3211.25 00:30:11.485 lat (msec): min=4247, max=12981, avg=11409.20, stdev=2998.30 00:30:11.485 clat percentiles (msec): 00:30:11.485 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 8557], 00:30:11.485 | 30.00th=[12818], 40.00th=[12953], 
50.00th=[12953], 60.00th=[12953], 00:30:11.485 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.486 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.486 | 99.99th=[12953] 00:30:11.486 lat (msec) : >=2000=100.00% 00:30:11.486 cpu : usr=0.00%, sys=0.46%, ctx=73, majf=0, minf=16129 00:30:11.486 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.486 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job1: (groupid=0, jobs=1): err= 0: pid=925886: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=3, BW=3667KiB/s (3755kB/s)(46.0MiB/12847msec) 00:30:11.486 slat (usec): min=514, max=2110.5k, avg=233346.21, stdev=641096.99 00:30:11.486 clat (msec): min=2112, max=12841, avg=9282.05, stdev=3516.86 00:30:11.486 lat (msec): min=4222, max=12846, avg=9515.40, stdev=3384.17 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:30:11.486 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[12684], 00:30:11.486 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.486 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.486 | 99.99th=[12818] 00:30:11.486 lat (msec) : >=2000=100.00% 00:30:11.486 cpu : usr=0.02%, sys=0.30%, ctx=48, majf=0, minf=11777 00:30:11.486 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.486 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job1: (groupid=0, jobs=1): err= 0: pid=925887: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=1, BW=1990KiB/s (2038kB/s)(25.0MiB/12865msec) 00:30:11.486 slat (usec): min=885, max=2160.7k, avg=431416.31, stdev=862379.27 00:30:11.486 clat (msec): min=2078, max=12862, avg=10354.35, stdev=3603.06 00:30:11.486 lat (msec): min=4239, max=12864, avg=10785.77, stdev=3193.27 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 2072], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:30:11.486 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:30:11.486 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.486 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.486 | 99.99th=[12818] 00:30:11.486 lat (msec) : >=2000=100.00% 00:30:11.486 cpu : usr=0.01%, sys=0.18%, ctx=40, majf=0, minf=6401 00:30:11.486 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:30:11.486 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925888: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=4, BW=4135KiB/s (4234kB/s)(52.0MiB/12878msec) 00:30:11.486 slat (usec): min=558, max=2103.5k, avg=207193.95, 
stdev=607489.14 00:30:11.486 clat (msec): min=2103, max=12874, avg=9565.25, stdev=3499.57 00:30:11.486 lat (msec): min=4206, max=12877, avg=9772.44, stdev=3365.50 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:30:11.486 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:30:11.486 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.486 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.486 | 99.99th=[12818] 00:30:11.486 lat (msec) : >=2000=100.00% 00:30:11.486 cpu : usr=0.00%, sys=0.35%, ctx=63, majf=0, minf=13313 00:30:11.486 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.486 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925889: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(659MiB/12846msec) 00:30:11.486 slat (usec): min=61, max=2192.8k, avg=16223.25, stdev=162846.25 00:30:11.486 clat (msec): min=320, max=11102, avg=2423.55, stdev=3934.91 00:30:11.486 lat (msec): min=321, max=11103, avg=2439.78, stdev=3948.20 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 330], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 384], 00:30:11.486 | 30.00th=[ 405], 40.00th=[ 409], 50.00th=[ 414], 60.00th=[ 418], 00:30:11.486 | 70.00th=[ 426], 80.00th=[ 4732], 90.00th=[10939], 95.00th=[10939], 00:30:11.486 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:30:11.486 | 99.99th=[11073] 00:30:11.486 bw ( KiB/s): min= 2052, max=368640, per=5.54%, avg=136192.50, stdev=164773.10, samples=8 00:30:11.486 iops : min= 2, max= 360, avg=133.00, stdev=160.91, samples=8 00:30:11.486 lat (msec) : 500=74.81%, 750=2.28%, >=2000=22.91% 00:30:11.486 cpu : usr=0.04%, sys=1.29%, ctx=543, majf=0, minf=32769 00:30:11.486 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:30:11.486 issued rwts: total=659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925890: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=10, BW=11.0MiB/s (11.5MB/s)(110MiB/10019msec) 00:30:11.486 slat (usec): min=710, max=2215.9k, avg=90907.20, stdev=392811.77 00:30:11.486 clat (msec): min=18, max=9999, avg=1194.34, stdev=2326.82 00:30:11.486 lat (msec): min=19, max=10018, avg=1285.25, stdev=2471.33 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 20], 5.00th=[ 23], 10.00th=[ 50], 20.00th=[ 171], 00:30:11.486 | 30.00th=[ 292], 40.00th=[ 409], 50.00th=[ 535], 60.00th=[ 684], 00:30:11.486 | 70.00th=[ 810], 80.00th=[ 1011], 90.00th=[ 1301], 95.00th=[ 9866], 00:30:11.486 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:30:11.486 | 99.99th=[10000] 00:30:11.486 lat (msec) : 20=1.82%, 50=8.18%, 100=2.73%, 250=13.64%, 500=20.00% 00:30:11.486 lat (msec) : 750=19.09%, 1000=12.73%, 2000=13.64%, >=2000=8.18% 00:30:11.486 cpu : usr=0.02%, sys=1.15%, ctx=326, majf=0, minf=28161 00:30:11.486 
IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.486 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925891: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=7, BW=7517KiB/s (7697kB/s)(95.0MiB/12942msec) 00:30:11.486 slat (usec): min=635, max=2095.6k, avg=114351.32, stdev=452987.72 00:30:11.486 clat (msec): min=2078, max=12940, avg=10748.09, stdev=3131.78 00:30:11.486 lat (msec): min=4173, max=12941, avg=10862.44, stdev=3007.72 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 2072], 5.00th=[ 4212], 10.00th=[ 4329], 20.00th=[ 8490], 00:30:11.486 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:30:11.486 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.486 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.486 | 99.99th=[12953] 00:30:11.486 lat (msec) : >=2000=100.00% 00:30:11.486 cpu : usr=0.00%, sys=0.65%, ctx=95, majf=0, minf=24321 00:30:11.486 IO depths : 1=1.1%, 2=2.1%, 4=4.2%, 8=8.4%, 16=16.8%, 32=33.7%, >=64=33.7% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.486 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925892: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=34, BW=34.7MiB/s (36.4MB/s)(348MiB/10023msec) 00:30:11.486 slat (usec): min=78, max=2088.7k, avg=28744.77, stdev=216638.79 00:30:11.486 clat (msec): min=16, max=9161, avg=1245.60, stdev=2275.06 00:30:11.486 lat (msec): min=22, max=9162, avg=1274.34, stdev=2313.58 00:30:11.486 clat percentiles (msec): 00:30:11.486 | 1.00th=[ 39], 5.00th=[ 111], 10.00th=[ 171], 20.00th=[ 317], 00:30:11.486 | 30.00th=[ 443], 40.00th=[ 477], 50.00th=[ 498], 60.00th=[ 510], 00:30:11.486 | 70.00th=[ 592], 80.00th=[ 667], 90.00th=[ 2903], 95.00th=[ 9060], 00:30:11.486 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:30:11.486 | 99.99th=[ 9194] 00:30:11.486 bw ( KiB/s): min=198656, max=198656, per=8.07%, avg=198656.00, stdev= 0.00, samples=1 00:30:11.486 iops : min= 194, max= 194, avg=194.00, stdev= 0.00, samples=1 00:30:11.486 lat (msec) : 20=0.29%, 50=1.72%, 100=2.30%, 250=10.92%, 500=40.80% 00:30:11.486 lat (msec) : 750=28.16%, 1000=2.30%, >=2000=13.51% 00:30:11.486 cpu : usr=0.03%, sys=1.50%, ctx=287, majf=0, minf=32769 00:30:11.486 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9% 00:30:11.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.486 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:30:11.486 issued rwts: total=348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.486 job2: (groupid=0, jobs=1): err= 0: pid=925893: Wed May 15 02:56:13 2024 00:30:11.486 read: IOPS=3, BW=3171KiB/s (3247kB/s)(40.0MiB/12916msec) 00:30:11.486 slat (usec): min=844, max=4158.6k, avg=270714.89, stdev=835819.47 00:30:11.486 clat (msec): min=2087, max=12914, avg=10232.89, stdev=3608.95 
00:30:11.486 lat (msec): min=4205, max=12915, avg=10503.61, stdev=3381.20 00:30:11.486 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:30:11.487 | 30.00th=[ 6477], 40.00th=[12684], 50.00th=[12818], 60.00th=[12953], 00:30:11.487 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.487 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.487 | 99.99th=[12953] 00:30:11.487 lat (msec) : >=2000=100.00% 00:30:11.487 cpu : usr=0.00%, sys=0.29%, ctx=64, majf=0, minf=10241 00:30:11.487 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.487 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925894: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=5, BW=5782KiB/s (5921kB/s)(73.0MiB/12928msec) 00:30:11.487 slat (usec): min=686, max=2108.9k, avg=147853.24, stdev=521766.29 00:30:11.487 clat (msec): min=2134, max=12926, avg=10467.12, stdev=3337.22 00:30:11.487 lat (msec): min=4202, max=12927, avg=10614.98, stdev=3199.15 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:30:11.487 | 30.00th=[ 8557], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818], 00:30:11.487 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.487 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.487 | 99.99th=[12953] 00:30:11.487 lat (msec) : >=2000=100.00% 00:30:11.487 cpu : usr=0.02%, sys=0.41%, ctx=92, majf=0, minf=18689 00:30:11.487 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.487 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925895: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=3, BW=3486KiB/s (3570kB/s)(44.0MiB/12923msec) 00:30:11.487 slat (usec): min=737, max=4147.2k, avg=246297.51, stdev=796616.90 00:30:11.487 clat (msec): min=2084, max=12920, avg=8955.61, stdev=3307.71 00:30:11.487 lat (msec): min=4193, max=12922, avg=9201.91, stdev=3185.42 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 2089], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6342], 00:30:11.487 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[ 8490], 00:30:11.487 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.487 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.487 | 99.99th=[12953] 00:30:11.487 lat (msec) : >=2000=100.00% 00:30:11.487 cpu : usr=0.00%, sys=0.29%, ctx=67, majf=0, minf=11265 00:30:11.487 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.487 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925896: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=9, BW=9.82MiB/s (10.3MB/s)(99.0MiB/10080msec) 00:30:11.487 slat (usec): min=887, max=4187.2k, avg=101127.65, stdev=510743.46 00:30:11.487 clat (msec): min=67, max=10077, avg=1442.67, stdev=2638.27 00:30:11.487 lat (msec): min=80, max=10079, avg=1543.80, stdev=2773.45 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 68], 5.00th=[ 130], 10.00th=[ 188], 20.00th=[ 305], 00:30:11.487 | 30.00th=[ 393], 40.00th=[ 514], 50.00th=[ 617], 60.00th=[ 735], 00:30:11.487 | 70.00th=[ 885], 80.00th=[ 1083], 90.00th=[ 3540], 95.00th=[10000], 00:30:11.487 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:30:11.487 | 99.99th=[10134] 00:30:11.487 lat (msec) : 100=4.04%, 250=11.11%, 500=24.24%, 750=22.22%, 1000=15.15% 00:30:11.487 lat (msec) : 2000=13.13%, >=2000=10.10% 00:30:11.487 cpu : usr=0.02%, sys=1.09%, ctx=326, majf=0, minf=25345 00:30:11.487 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.487 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925897: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=176, BW=177MiB/s (186MB/s)(1772MiB/10014msec) 00:30:11.487 slat (usec): min=51, max=2106.2k, avg=5639.67, stdev=83810.22 00:30:11.487 clat (msec): min=12, max=6836, avg=336.42, stdev=766.55 00:30:11.487 lat (msec): min=14, max=6879, avg=342.06, stdev=784.97 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 32], 5.00th=[ 114], 10.00th=[ 153], 20.00th=[ 153], 00:30:11.487 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 155], 00:30:11.487 | 70.00th=[ 330], 80.00th=[ 368], 90.00th=[ 489], 95.00th=[ 502], 00:30:11.487 | 99.00th=[ 6745], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:30:11.487 | 99.99th=[ 6812] 00:30:11.487 bw ( KiB/s): min=92160, max=851968, per=19.56%, avg=481171.86, stdev=293151.58, samples=7 00:30:11.487 iops : min= 90, max= 832, avg=469.86, stdev=286.30, samples=7 00:30:11.487 lat (msec) : 20=0.40%, 50=1.58%, 100=2.37%, 250=59.20%, 500=29.97% 00:30:11.487 lat (msec) : 750=4.51%, >=2000=1.98% 00:30:11.487 cpu : usr=0.04%, sys=2.47%, ctx=1545, majf=0, minf=32769 00:30:11.487 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.487 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925898: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=3, BW=3744KiB/s (3834kB/s)(47.0MiB/12855msec) 00:30:11.487 slat (usec): min=694, max=2088.2k, avg=228133.76, stdev=636172.49 00:30:11.487 clat (msec): min=2132, max=12852, avg=9038.51, stdev=3604.04 00:30:11.487 lat (msec): min=4200, max=12854, avg=9266.64, stdev=3495.10 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:30:11.487 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[10671], 60.00th=[10671], 00:30:11.487 | 
70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.487 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.487 | 99.99th=[12818] 00:30:11.487 lat (msec) : >=2000=100.00% 00:30:11.487 cpu : usr=0.00%, sys=0.35%, ctx=56, majf=0, minf=12033 00:30:11.487 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.487 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925899: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=5, BW=5528KiB/s (5661kB/s)(70.0MiB/12966msec) 00:30:11.487 slat (usec): min=617, max=2102.3k, avg=155185.95, stdev=538691.81 00:30:11.487 clat (msec): min=2102, max=12964, avg=10861.17, stdev=3283.27 00:30:11.487 lat (msec): min=4204, max=12965, avg=11016.36, stdev=3115.73 00:30:11.487 clat percentiles (msec): 00:30:11.487 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:30:11.487 | 30.00th=[ 8557], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:30:11.487 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:30:11.487 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:30:11.487 | 99.99th=[12953] 00:30:11.487 lat (msec) : >=2000=100.00% 00:30:11.487 cpu : usr=0.00%, sys=0.43%, ctx=80, majf=0, minf=17921 00:30:11.487 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:30:11.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.487 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.487 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.487 job2: (groupid=0, jobs=1): err= 0: pid=925900: Wed May 15 02:56:13 2024 00:30:11.487 read: IOPS=13, BW=13.3MiB/s (14.0MB/s)(134MiB/10057msec) 00:30:11.488 slat (usec): min=511, max=2216.3k, avg=74790.60, stdev=352131.69 00:30:11.488 clat (msec): min=33, max=9982, avg=1818.48, stdev=2870.91 00:30:11.488 lat (msec): min=69, max=10001, avg=1893.27, stdev=2953.41 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 70], 5.00th=[ 142], 10.00th=[ 209], 20.00th=[ 372], 00:30:11.488 | 30.00th=[ 506], 40.00th=[ 684], 50.00th=[ 818], 60.00th=[ 1053], 00:30:11.488 | 70.00th=[ 1183], 80.00th=[ 1385], 90.00th=[ 7886], 95.00th=[ 9866], 00:30:11.488 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:30:11.488 | 99.99th=[10000] 00:30:11.488 bw ( KiB/s): min=12288, max=12288, per=0.50%, avg=12288.00, stdev= 0.00, samples=1 00:30:11.488 iops : min= 12, max= 12, avg=12.00, stdev= 0.00, samples=1 00:30:11.488 lat (msec) : 50=0.75%, 100=2.99%, 250=8.21%, 500=17.16%, 750=16.42% 00:30:11.488 lat (msec) : 1000=13.43%, 2000=27.61%, >=2000=13.43% 00:30:11.488 cpu : usr=0.00%, sys=1.19%, ctx=323, majf=0, minf=32769 00:30:11.488 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=6.0%, 16=11.9%, 32=23.9%, >=64=53.0% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=87.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=12.5% 00:30:11.488 issued rwts: total=134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 
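The job2 entries above, and the job3/job4/job5 blocks that follow, all repeat the same fio per-job layout: a read summary line (IOPS/BW), slat/clat/lat statistics, clat percentiles, an optional bw/iops aggregate, latency buckets, CPU usage, and the IO-depth/submit/complete distributions. As a minimal sketch only (not part of the test harness), the per-job read summaries can be pulled out of a saved copy of this console output; the file name fio.log is an assumed placeholder:

    # Hypothetical post-processing of a captured copy of this log (fio.log is an assumed name).
    # Extracts each per-job "read: IOPS=..., BW=..." summary and totals the integer IOPS values.
    grep -oE 'read: IOPS=[0-9.]+, BW=[0-9.]+[KMG]iB/s' fio.log |
      awk -F'[=,]' '{iops += $2; n++} END {if (n) printf "jobs=%d, total read IOPS=%d\n", n, iops}'

This only covers the plain-integer IOPS form seen in this run; it is a convenience for reading the log, not part of srq_overwhelm.sh.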
00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925902: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=87, BW=87.4MiB/s (91.6MB/s)(1118MiB/12793msec) 00:30:11.488 slat (usec): min=61, max=2153.1k, avg=9565.68, stdev=122235.77 00:30:11.488 clat (msec): min=142, max=12747, avg=1226.90, stdev=2691.19 00:30:11.488 lat (msec): min=143, max=12748, avg=1236.46, stdev=2707.16 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 144], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 146], 00:30:11.488 | 30.00th=[ 148], 40.00th=[ 194], 50.00th=[ 247], 60.00th=[ 275], 00:30:11.488 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 6342], 95.00th=[ 8658], 00:30:11.488 | 99.00th=[10671], 99.50th=[12550], 99.90th=[12684], 99.95th=[12684], 00:30:11.488 | 99.99th=[12684] 00:30:11.488 bw ( KiB/s): min= 2048, max=667648, per=9.17%, avg=225508.00, stdev=267344.87, samples=9 00:30:11.488 iops : min= 2, max= 652, avg=220.22, stdev=261.08, samples=9 00:30:11.488 lat (msec) : 250=50.72%, 500=35.15%, >=2000=14.13% 00:30:11.488 cpu : usr=0.06%, sys=1.34%, ctx=936, majf=0, minf=32769 00:30:11.488 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.488 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925903: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=188, BW=189MiB/s (198MB/s)(2429MiB/12873msec) 00:30:11.488 slat (usec): min=51, max=2121.9k, avg=4421.36, stdev=72002.90 00:30:11.488 clat (msec): min=114, max=8693, avg=654.41, stdev=1744.73 00:30:11.488 lat (msec): min=114, max=8695, avg=658.83, stdev=1751.76 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 115], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 123], 00:30:11.488 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 132], 60.00th=[ 211], 00:30:11.488 | 70.00th=[ 342], 80.00th=[ 405], 90.00th=[ 447], 95.00th=[ 4463], 00:30:11.488 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:30:11.488 | 99.99th=[ 8658] 00:30:11.488 bw ( KiB/s): min= 1412, max=1071104, per=15.97%, avg=392813.08, stdev=377654.28, samples=12 00:30:11.488 iops : min= 1, max= 1046, avg=383.50, stdev=368.91, samples=12 00:30:11.488 lat (msec) : 250=63.89%, 500=28.16%, 750=1.28%, >=2000=6.67% 00:30:11.488 cpu : usr=0.03%, sys=2.36%, ctx=2481, majf=0, minf=32769 00:30:11.488 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.488 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925904: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=3, BW=3658KiB/s (3745kB/s)(46.0MiB/12878msec) 00:30:11.488 slat (usec): min=519, max=2077.6k, avg=234352.64, stdev=639576.04 00:30:11.488 clat (msec): min=2097, max=12841, avg=9820.87, stdev=3086.49 00:30:11.488 lat (msec): min=4170, max=12877, avg=10055.22, stdev=2890.02 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6409], 00:30:11.488 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 
00:30:11.488 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.488 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.488 | 99.99th=[12818] 00:30:11.488 lat (msec) : >=2000=100.00% 00:30:11.488 cpu : usr=0.00%, sys=0.30%, ctx=61, majf=0, minf=11777 00:30:11.488 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.488 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925905: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=5, BW=5281KiB/s (5408kB/s)(66.0MiB/12798msec) 00:30:11.488 slat (usec): min=793, max=2050.4k, avg=162338.69, stdev=530840.39 00:30:11.488 clat (msec): min=2082, max=12795, avg=9452.73, stdev=3343.28 00:30:11.488 lat (msec): min=4133, max=12797, avg=9615.07, stdev=3238.40 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:30:11.488 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671], 00:30:11.488 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.488 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.488 | 99.99th=[12818] 00:30:11.488 lat (msec) : >=2000=100.00% 00:30:11.488 cpu : usr=0.00%, sys=0.49%, ctx=75, majf=0, minf=16897 00:30:11.488 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.488 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925906: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=5, BW=5416KiB/s (5546kB/s)(68.0MiB/12856msec) 00:30:11.488 slat (usec): min=781, max=2063.7k, avg=157923.57, stdev=528338.07 00:30:11.488 clat (msec): min=2116, max=12853, avg=10254.83, stdev=2969.49 00:30:11.488 lat (msec): min=4180, max=12855, avg=10412.76, stdev=2811.58 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8423], 00:30:11.488 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12684], 00:30:11.488 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.488 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.488 | 99.99th=[12818] 00:30:11.488 lat (msec) : >=2000=100.00% 00:30:11.488 cpu : usr=0.00%, sys=0.47%, ctx=83, majf=0, minf=17409 00:30:11.488 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.488 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925907: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=36, BW=36.3MiB/s (38.0MB/s)(390MiB/10756msec) 00:30:11.488 slat (usec): min=444, max=2067.4k, avg=27306.47, stdev=202088.07 00:30:11.488 clat 
(msec): min=104, max=6993, avg=1635.87, stdev=1473.01 00:30:11.488 lat (msec): min=418, max=6996, avg=1663.18, stdev=1495.13 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 418], 5.00th=[ 422], 10.00th=[ 430], 20.00th=[ 493], 00:30:11.488 | 30.00th=[ 617], 40.00th=[ 693], 50.00th=[ 894], 60.00th=[ 986], 00:30:11.488 | 70.00th=[ 2500], 80.00th=[ 2802], 90.00th=[ 3071], 95.00th=[ 3138], 00:30:11.488 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:30:11.488 | 99.99th=[ 7013] 00:30:11.488 bw ( KiB/s): min= 1551, max=227328, per=3.64%, avg=89651.67, stdev=91165.82, samples=6 00:30:11.488 iops : min= 1, max= 222, avg=87.33, stdev=89.10, samples=6 00:30:11.488 lat (msec) : 250=0.26%, 500=19.74%, 750=23.08%, 1000=16.92%, 2000=0.77% 00:30:11.488 lat (msec) : >=2000=39.23% 00:30:11.488 cpu : usr=0.00%, sys=1.09%, ctx=630, majf=0, minf=32769 00:30:11.488 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:30:11.488 issued rwts: total=390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.488 job3: (groupid=0, jobs=1): err= 0: pid=925908: Wed May 15 02:56:13 2024 00:30:11.488 read: IOPS=12, BW=12.0MiB/s (12.6MB/s)(129MiB/10745msec) 00:30:11.488 slat (usec): min=746, max=2150.9k, avg=82474.85, stdev=350797.88 00:30:11.488 clat (msec): min=104, max=10741, avg=3424.84, stdev=1688.18 00:30:11.488 lat (msec): min=2064, max=10742, avg=3507.32, stdev=1781.99 00:30:11.488 clat percentiles (msec): 00:30:11.488 | 1.00th=[ 2072], 5.00th=[ 2232], 10.00th=[ 2299], 20.00th=[ 2534], 00:30:11.488 | 30.00th=[ 2702], 40.00th=[ 2903], 50.00th=[ 3037], 60.00th=[ 3205], 00:30:11.488 | 70.00th=[ 3373], 80.00th=[ 3809], 90.00th=[ 4245], 95.00th=[ 8557], 00:30:11.488 | 99.00th=[10671], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.488 | 99.99th=[10805] 00:30:11.488 bw ( KiB/s): min= 1662, max= 2048, per=0.08%, avg=1855.00, stdev=272.94, samples=2 00:30:11.488 iops : min= 1, max= 2, avg= 1.50, stdev= 0.71, samples=2 00:30:11.488 lat (msec) : 250=0.78%, >=2000=99.22% 00:30:11.488 cpu : usr=0.00%, sys=1.20%, ctx=605, majf=0, minf=32769 00:30:11.488 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.2%, 16=12.4%, 32=24.8%, >=64=51.2% 00:30:11.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.488 complete : 0=0.0%, 4=66.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=33.3% 00:30:11.488 issued rwts: total=129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925909: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=4, BW=4548KiB/s (4657kB/s)(57.0MiB/12835msec) 00:30:11.489 slat (usec): min=527, max=2069.0k, avg=187615.43, stdev=578812.62 00:30:11.489 clat (msec): min=2139, max=12833, avg=9192.96, stdev=3378.84 00:30:11.489 lat (msec): min=4187, max=12833, avg=9380.57, stdev=3275.54 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:30:11.489 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:30:11.489 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.489 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.489 | 99.99th=[12818] 00:30:11.489 lat 
(msec) : >=2000=100.00% 00:30:11.489 cpu : usr=0.00%, sys=0.26%, ctx=66, majf=0, minf=14593 00:30:11.489 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.489 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925910: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=25, BW=25.2MiB/s (26.4MB/s)(323MiB/12830msec) 00:30:11.489 slat (usec): min=451, max=2068.3k, avg=33166.29, stdev=237591.63 00:30:11.489 clat (msec): min=227, max=10667, avg=4262.99, stdev=4513.43 00:30:11.489 lat (msec): min=229, max=12572, avg=4296.16, stdev=4532.30 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 228], 5.00th=[ 232], 10.00th=[ 236], 20.00th=[ 245], 00:30:11.489 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 1838], 60.00th=[ 4212], 00:30:11.489 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10268], 00:30:11.489 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10671], 99.95th=[10671], 00:30:11.489 | 99.99th=[10671] 00:30:11.489 bw ( KiB/s): min= 2052, max=292864, per=2.33%, avg=57344.57, stdev=106252.50, samples=7 00:30:11.489 iops : min= 2, max= 286, avg=56.00, stdev=103.76, samples=7 00:30:11.489 lat (msec) : 250=37.77%, 500=10.53%, 2000=2.17%, >=2000=49.54% 00:30:11.489 cpu : usr=0.01%, sys=0.69%, ctx=621, majf=0, minf=32769 00:30:11.489 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.5% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:30:11.489 issued rwts: total=323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925911: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=11, BW=11.7MiB/s (12.3MB/s)(126MiB/10730msec) 00:30:11.489 slat (msec): min=2, max=2150, avg=84.34, stdev=354.73 00:30:11.489 clat (msec): min=102, max=10572, avg=3250.86, stdev=1273.04 00:30:11.489 lat (msec): min=2062, max=10729, avg=3335.20, stdev=1407.70 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 2056], 5.00th=[ 2265], 10.00th=[ 2333], 20.00th=[ 2534], 00:30:11.489 | 30.00th=[ 2668], 40.00th=[ 2869], 50.00th=[ 3004], 60.00th=[ 3171], 00:30:11.489 | 70.00th=[ 3339], 80.00th=[ 3708], 90.00th=[ 4111], 95.00th=[ 4329], 00:30:11.489 | 99.00th=[ 8658], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:30:11.489 | 99.99th=[10537] 00:30:11.489 lat (msec) : 250=0.79%, >=2000=99.21% 00:30:11.489 cpu : usr=0.00%, sys=1.18%, ctx=597, majf=0, minf=32257 00:30:11.489 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.3%, 16=12.7%, 32=25.4%, >=64=50.0% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.489 issued rwts: total=126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925912: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=5, BW=5735KiB/s (5873kB/s)(72.0MiB/12855msec) 00:30:11.489 slat (usec): min=665, max=2063.4k, avg=149080.58, stdev=519104.02 00:30:11.489 clat (msec): min=2120, 
max=12851, avg=9455.06, stdev=3386.56 00:30:11.489 lat (msec): min=4157, max=12854, avg=9604.14, stdev=3294.12 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:30:11.489 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12684], 00:30:11.489 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.489 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.489 | 99.99th=[12818] 00:30:11.489 lat (msec) : >=2000=100.00% 00:30:11.489 cpu : usr=0.02%, sys=0.39%, ctx=77, majf=0, minf=18433 00:30:11.489 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.489 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925913: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=3, BW=3358KiB/s (3439kB/s)(42.0MiB/12806msec) 00:30:11.489 slat (usec): min=872, max=2073.6k, avg=254794.74, stdev=664658.14 00:30:11.489 clat (msec): min=2104, max=12803, avg=9826.43, stdev=3126.67 00:30:11.489 lat (msec): min=4177, max=12805, avg=10081.23, stdev=2910.58 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6477], 00:30:11.489 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:30:11.489 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.489 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.489 | 99.99th=[12818] 00:30:11.489 lat (msec) : >=2000=100.00% 00:30:11.489 cpu : usr=0.02%, sys=0.28%, ctx=50, majf=0, minf=10753 00:30:11.489 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.489 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job3: (groupid=0, jobs=1): err= 0: pid=925914: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=14, BW=14.7MiB/s (15.4MB/s)(190MiB/12911msec) 00:30:11.489 slat (usec): min=76, max=2061.3k, avg=56849.87, stdev=305020.04 00:30:11.489 clat (msec): min=2108, max=10656, avg=6586.75, stdev=1869.76 00:30:11.489 lat (msec): min=2227, max=10667, avg=6643.60, stdev=1855.40 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 2232], 5.00th=[ 2299], 10.00th=[ 4212], 20.00th=[ 4933], 00:30:11.489 | 30.00th=[ 6275], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 7953], 00:30:11.489 | 70.00th=[ 8087], 80.00th=[ 8154], 90.00th=[ 8356], 95.00th=[ 8356], 00:30:11.489 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:30:11.489 | 99.99th=[10671] 00:30:11.489 bw ( KiB/s): min= 1412, max=69632, per=1.04%, avg=25670.20, stdev=27942.44, samples=5 00:30:11.489 iops : min= 1, max= 68, avg=24.80, stdev=27.44, samples=5 00:30:11.489 lat (msec) : >=2000=100.00% 00:30:11.489 cpu : usr=0.00%, sys=0.97%, ctx=129, majf=0, minf=32769 00:30:11.489 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:11.489 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:30:11.489 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job4: (groupid=0, jobs=1): err= 0: pid=925915: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=6, BW=6741KiB/s (6902kB/s)(71.0MiB/10786msec) 00:30:11.489 slat (usec): min=617, max=2019.4k, avg=150170.15, stdev=509242.75 00:30:11.489 clat (msec): min=123, max=10784, avg=6693.25, stdev=3140.37 00:30:11.489 lat (msec): min=2142, max=10785, avg=6843.43, stdev=3075.97 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 124], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:30:11.489 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:30:11.489 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:30:11.489 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.489 | 99.99th=[10805] 00:30:11.489 lat (msec) : 250=1.41%, >=2000=98.59% 00:30:11.489 cpu : usr=0.02%, sys=0.59%, ctx=78, majf=0, minf=18177 00:30:11.489 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.489 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job4: (groupid=0, jobs=1): err= 0: pid=925916: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=27, BW=27.0MiB/s (28.3MB/s)(348MiB/12875msec) 00:30:11.489 slat (usec): min=470, max=2012.0k, avg=30978.46, stdev=185238.76 00:30:11.489 clat (msec): min=598, max=8530, avg=4251.74, stdev=1584.88 00:30:11.489 lat (msec): min=605, max=8545, avg=4282.71, stdev=1610.23 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 600], 5.00th=[ 693], 10.00th=[ 2467], 20.00th=[ 2668], 00:30:11.489 | 30.00th=[ 3809], 40.00th=[ 4178], 50.00th=[ 4329], 60.00th=[ 4463], 00:30:11.489 | 70.00th=[ 5537], 80.00th=[ 5805], 90.00th=[ 6007], 95.00th=[ 6074], 00:30:11.489 | 99.00th=[ 6879], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:30:11.489 | 99.99th=[ 8557] 00:30:11.489 bw ( KiB/s): min= 1412, max=149504, per=2.04%, avg=50205.00, stdev=52682.87, samples=9 00:30:11.489 iops : min= 1, max= 146, avg=48.78, stdev=51.58, samples=9 00:30:11.489 lat (msec) : 750=5.75%, 1000=0.29%, 2000=1.15%, >=2000=92.82% 00:30:11.489 cpu : usr=0.02%, sys=1.04%, ctx=684, majf=0, minf=32769 00:30:11.489 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9% 00:30:11.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.489 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:30:11.489 issued rwts: total=348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.489 job4: (groupid=0, jobs=1): err= 0: pid=925917: Wed May 15 02:56:13 2024 00:30:11.489 read: IOPS=29, BW=29.7MiB/s (31.1MB/s)(381MiB/12845msec) 00:30:11.489 slat (usec): min=177, max=2078.4k, avg=28224.19, stdev=188118.83 00:30:11.489 clat (msec): min=703, max=10181, avg=4009.60, stdev=3972.71 00:30:11.489 lat (msec): min=706, max=10184, avg=4037.82, stdev=3979.14 00:30:11.489 clat percentiles (msec): 00:30:11.489 | 1.00th=[ 709], 5.00th=[ 735], 10.00th=[ 760], 20.00th=[ 986], 00:30:11.489 | 30.00th=[ 1099], 
40.00th=[ 1301], 50.00th=[ 1368], 60.00th=[ 1603], 00:30:11.489 | 70.00th=[ 8490], 80.00th=[ 9731], 90.00th=[10000], 95.00th=[10134], 00:30:11.489 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:30:11.490 | 99.99th=[10134] 00:30:11.490 bw ( KiB/s): min= 1456, max=192512, per=2.35%, avg=57710.56, stdev=67877.47, samples=9 00:30:11.490 iops : min= 1, max= 188, avg=56.22, stdev=66.26, samples=9 00:30:11.490 lat (msec) : 750=8.14%, 1000=15.22%, 2000=40.42%, >=2000=36.22% 00:30:11.490 cpu : usr=0.02%, sys=1.08%, ctx=673, majf=0, minf=32769 00:30:11.490 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:30:11.490 issued rwts: total=381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925918: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=91, BW=91.6MiB/s (96.0MB/s)(984MiB/10746msec) 00:30:11.490 slat (usec): min=55, max=2002.1k, avg=10803.14, stdev=82238.86 00:30:11.490 clat (msec): min=109, max=3283, avg=1255.83, stdev=934.41 00:30:11.490 lat (msec): min=398, max=3285, avg=1266.63, stdev=936.15 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 401], 5.00th=[ 414], 10.00th=[ 456], 20.00th=[ 510], 00:30:11.490 | 30.00th=[ 542], 40.00th=[ 651], 50.00th=[ 709], 60.00th=[ 986], 00:30:11.490 | 70.00th=[ 1737], 80.00th=[ 2299], 90.00th=[ 2970], 95.00th=[ 3171], 00:30:11.490 | 99.00th=[ 3239], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272], 00:30:11.490 | 99.99th=[ 3272] 00:30:11.490 bw ( KiB/s): min= 1662, max=313344, per=5.09%, avg=125339.29, stdev=103252.03, samples=14 00:30:11.490 iops : min= 1, max= 306, avg=122.36, stdev=100.89, samples=14 00:30:11.490 lat (msec) : 250=0.10%, 500=16.46%, 750=34.45%, 1000=9.55%, 2000=16.06% 00:30:11.490 lat (msec) : >=2000=23.37% 00:30:11.490 cpu : usr=0.05%, sys=1.83%, ctx=1507, majf=0, minf=32769 00:30:11.490 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.490 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925919: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=6, BW=6247KiB/s (6397kB/s)(66.0MiB/10819msec) 00:30:11.490 slat (usec): min=823, max=2022.2k, avg=161461.15, stdev=527026.73 00:30:11.490 clat (msec): min=161, max=10815, avg=8308.90, stdev=3068.51 00:30:11.490 lat (msec): min=2174, max=10818, avg=8470.36, stdev=2909.45 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 163], 5.00th=[ 2232], 10.00th=[ 2299], 20.00th=[ 4463], 00:30:11.490 | 30.00th=[ 6611], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[10671], 00:30:11.490 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:30:11.490 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.490 | 99.99th=[10805] 00:30:11.490 lat (msec) : 250=1.52%, >=2000=98.48% 00:30:11.490 cpu : usr=0.04%, sys=0.55%, ctx=84, majf=0, minf=16897 00:30:11.490 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.490 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925920: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=3, BW=3193KiB/s (3269kB/s)(40.0MiB/12829msec) 00:30:11.490 slat (usec): min=893, max=2069.5k, avg=268412.16, stdev=677746.06 00:30:11.490 clat (msec): min=2091, max=12825, avg=10227.70, stdev=3266.79 00:30:11.490 lat (msec): min=4155, max=12828, avg=10496.11, stdev=3012.34 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4212], 20.00th=[ 6342], 00:30:11.490 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:30:11.490 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:30:11.490 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.490 | 99.99th=[12818] 00:30:11.490 lat (msec) : >=2000=100.00% 00:30:11.490 cpu : usr=0.01%, sys=0.29%, ctx=67, majf=0, minf=10241 00:30:11.490 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.490 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925921: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=46, BW=46.6MiB/s (48.8MB/s)(594MiB/12760msec) 00:30:11.490 slat (usec): min=60, max=2051.6k, avg=17904.38, stdev=166754.74 00:30:11.490 clat (msec): min=391, max=10996, avg=2670.19, stdev=3903.53 00:30:11.490 lat (msec): min=391, max=10999, avg=2688.09, stdev=3916.94 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 405], 20.00th=[ 409], 00:30:11.490 | 30.00th=[ 409], 40.00th=[ 430], 50.00th=[ 506], 60.00th=[ 567], 00:30:11.490 | 70.00th=[ 600], 80.00th=[ 6409], 90.00th=[10805], 95.00th=[10939], 00:30:11.490 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:30:11.490 | 99.99th=[10939] 00:30:11.490 bw ( KiB/s): min= 2052, max=317440, per=4.32%, avg=106268.89, stdev=135157.70, samples=9 00:30:11.490 iops : min= 2, max= 310, avg=103.78, stdev=131.99, samples=9 00:30:11.490 lat (msec) : 500=48.48%, 750=24.07%, >=2000=27.44% 00:30:11.490 cpu : usr=0.03%, sys=1.18%, ctx=503, majf=0, minf=32769 00:30:11.490 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:30:11.490 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925922: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=16, BW=16.2MiB/s (17.0MB/s)(208MiB/12812msec) 00:30:11.490 slat (usec): min=55, max=2047.0k, avg=51514.06, stdev=293242.07 00:30:11.490 clat (msec): min=458, max=12669, avg=7612.18, stdev=4438.00 00:30:11.490 lat (msec): min=461, max=12675, avg=7663.69, stdev=4431.12 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 464], 5.00th=[ 558], 10.00th=[ 617], 20.00th=[ 3910], 00:30:11.490 | 30.00th=[ 
4178], 40.00th=[ 6342], 50.00th=[ 8356], 60.00th=[10671], 00:30:11.490 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:30:11.490 | 99.00th=[12416], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:30:11.490 | 99.99th=[12684] 00:30:11.490 bw ( KiB/s): min= 2052, max=69632, per=0.96%, avg=23704.00, stdev=23818.85, samples=7 00:30:11.490 iops : min= 2, max= 68, avg=23.14, stdev=23.26, samples=7 00:30:11.490 lat (msec) : 500=2.40%, 750=13.94%, 2000=2.40%, >=2000=81.25% 00:30:11.490 cpu : usr=0.02%, sys=1.00%, ctx=164, majf=0, minf=32769 00:30:11.490 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.4%, >=64=69.7% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:30:11.490 issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925923: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=119, BW=119MiB/s (125MB/s)(1290MiB/10806msec) 00:30:11.490 slat (usec): min=58, max=1968.3k, avg=8284.49, stdev=64274.75 00:30:11.490 clat (msec): min=109, max=2699, avg=994.62, stdev=665.99 00:30:11.490 lat (msec): min=453, max=2702, avg=1002.90, stdev=666.96 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 460], 5.00th=[ 468], 10.00th=[ 485], 20.00th=[ 518], 00:30:11.490 | 30.00th=[ 600], 40.00th=[ 642], 50.00th=[ 751], 60.00th=[ 802], 00:30:11.490 | 70.00th=[ 894], 80.00th=[ 1200], 90.00th=[ 2198], 95.00th=[ 2467], 00:30:11.490 | 99.00th=[ 2668], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:30:11.490 | 99.99th=[ 2702] 00:30:11.490 bw ( KiB/s): min= 8192, max=280576, per=6.45%, avg=158611.07, stdev=80747.75, samples=15 00:30:11.490 iops : min= 8, max= 274, avg=154.80, stdev=78.90, samples=15 00:30:11.490 lat (msec) : 250=0.08%, 500=12.25%, 750=37.60%, 1000=29.07%, 2000=1.94% 00:30:11.490 lat (msec) : >=2000=19.07% 00:30:11.490 cpu : usr=0.07%, sys=2.26%, ctx=1510, majf=0, minf=32769 00:30:11.490 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.490 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925924: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=5, BW=5602KiB/s (5737kB/s)(59.0MiB/10784msec) 00:30:11.490 slat (usec): min=681, max=2053.5k, avg=180651.80, stdev=569277.70 00:30:11.490 clat (msec): min=125, max=10782, avg=6468.41, stdev=3123.20 00:30:11.490 lat (msec): min=2141, max=10783, avg=6649.07, stdev=3057.55 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 126], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4279], 00:30:11.490 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 8658], 00:30:11.490 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:30:11.490 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.490 | 99.99th=[10805] 00:30:11.490 lat (msec) : 250=1.69%, >=2000=98.31% 00:30:11.490 cpu : usr=0.02%, sys=0.47%, ctx=65, majf=0, minf=15105 00:30:11.490 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:30:11.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.490 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.490 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.490 job4: (groupid=0, jobs=1): err= 0: pid=925925: Wed May 15 02:56:13 2024 00:30:11.490 read: IOPS=5, BW=5446KiB/s (5577kB/s)(68.0MiB/12786msec) 00:30:11.490 slat (usec): min=659, max=2045.8k, avg=157185.89, stdev=526162.05 00:30:11.490 clat (msec): min=2096, max=12784, avg=9280.21, stdev=3335.38 00:30:11.490 lat (msec): min=4123, max=12785, avg=9437.39, stdev=3242.37 00:30:11.490 clat percentiles (msec): 00:30:11.490 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 6275], 00:30:11.490 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10537], 60.00th=[10671], 00:30:11.490 | 70.00th=[12550], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.490 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.490 | 99.99th=[12818] 00:30:11.490 lat (msec) : >=2000=100.00% 00:30:11.490 cpu : usr=0.01%, sys=0.38%, ctx=70, majf=0, minf=17409 00:30:11.490 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.491 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job4: (groupid=0, jobs=1): err= 0: pid=925926: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=4, BW=4262KiB/s (4365kB/s)(45.0MiB/10811msec) 00:30:11.491 slat (usec): min=773, max=2018.1k, avg=236696.87, stdev=630564.70 00:30:11.491 clat (msec): min=158, max=10809, avg=7372.16, stdev=3327.45 00:30:11.491 lat (msec): min=2156, max=10810, avg=7608.86, stdev=3178.18 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 159], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 4329], 00:30:11.491 | 30.00th=[ 4463], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[ 8658], 00:30:11.491 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:30:11.491 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.491 | 99.99th=[10805] 00:30:11.491 lat (msec) : 250=2.22%, >=2000=97.78% 00:30:11.491 cpu : usr=0.01%, sys=0.38%, ctx=70, majf=0, minf=11521 00:30:11.491 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.491 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job4: (groupid=0, jobs=1): err= 0: pid=925927: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=6, BW=6996KiB/s (7164kB/s)(88.0MiB/12880msec) 00:30:11.491 slat (usec): min=707, max=2049.9k, avg=122035.42, stdev=472991.30 00:30:11.491 clat (msec): min=2140, max=12878, avg=9815.63, stdev=3334.11 00:30:11.491 lat (msec): min=4160, max=12879, avg=9937.66, stdev=3245.31 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:30:11.491 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818], 00:30:11.491 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:30:11.491 | 99.00th=[12818], 
99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:30:11.491 | 99.99th=[12818] 00:30:11.491 lat (msec) : >=2000=100.00% 00:30:11.491 cpu : usr=0.02%, sys=0.61%, ctx=103, majf=0, minf=22529 00:30:11.491 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.491 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925928: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=203, BW=203MiB/s (213MB/s)(2033MiB/10013msec) 00:30:11.491 slat (usec): min=57, max=2120.9k, avg=4914.56, stdev=78149.49 00:30:11.491 clat (msec): min=12, max=4580, avg=367.63, stdev=593.52 00:30:11.491 lat (msec): min=13, max=4582, avg=372.54, stdev=602.58 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 32], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 130], 00:30:11.491 | 30.00th=[ 201], 40.00th=[ 230], 50.00th=[ 236], 60.00th=[ 241], 00:30:11.491 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 2433], 00:30:11.491 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 4597], 99.95th=[ 4597], 00:30:11.491 | 99.99th=[ 4597] 00:30:11.491 bw ( KiB/s): min= 6131, max=718848, per=19.83%, avg=487934.38, stdev=225843.81, samples=8 00:30:11.491 iops : min= 5, max= 702, avg=476.38, stdev=220.85, samples=8 00:30:11.491 lat (msec) : 20=0.39%, 50=1.57%, 100=1.38%, 250=61.58%, 500=28.14% 00:30:11.491 lat (msec) : >=2000=6.94% 00:30:11.491 cpu : usr=0.12%, sys=2.30%, ctx=2568, majf=0, minf=32769 00:30:11.491 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.491 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925929: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=33, BW=34.0MiB/s (35.6MB/s)(367MiB/10801msec) 00:30:11.491 slat (usec): min=508, max=2127.3k, avg=29039.49, stdev=214882.30 00:30:11.491 clat (msec): min=141, max=6744, avg=2165.36, stdev=2119.24 00:30:11.491 lat (msec): min=317, max=6748, avg=2194.40, stdev=2128.75 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 317], 5.00th=[ 330], 10.00th=[ 342], 20.00th=[ 363], 00:30:11.491 | 30.00th=[ 368], 40.00th=[ 368], 50.00th=[ 376], 60.00th=[ 3473], 00:30:11.491 | 70.00th=[ 3540], 80.00th=[ 3641], 90.00th=[ 6409], 95.00th=[ 6544], 00:30:11.491 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:30:11.491 | 99.99th=[ 6745] 00:30:11.491 bw ( KiB/s): min= 1517, max=296960, per=4.99%, avg=122747.25, stdev=146288.21, samples=4 00:30:11.491 iops : min= 1, max= 290, avg=119.75, stdev=142.99, samples=4 00:30:11.491 lat (msec) : 250=0.27%, 500=52.86%, 2000=0.27%, >=2000=46.59% 00:30:11.491 cpu : usr=0.04%, sys=0.99%, ctx=955, majf=0, minf=32769 00:30:11.491 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:30:11.491 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 
latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925930: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=40, BW=40.6MiB/s (42.5MB/s)(517MiB/12748msec) 00:30:11.491 slat (usec): min=701, max=2123.4k, avg=20504.81, stdev=178143.97 00:30:11.491 clat (msec): min=458, max=4914, avg=1657.91, stdev=1775.22 00:30:11.491 lat (msec): min=462, max=6956, avg=1678.42, stdev=1793.31 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 460], 5.00th=[ 464], 10.00th=[ 468], 20.00th=[ 472], 00:30:11.491 | 30.00th=[ 477], 40.00th=[ 489], 50.00th=[ 527], 60.00th=[ 592], 00:30:11.491 | 70.00th=[ 2601], 80.00th=[ 4396], 90.00th=[ 4597], 95.00th=[ 4665], 00:30:11.491 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:30:11.491 | 99.99th=[ 4933] 00:30:11.491 bw ( KiB/s): min= 1662, max=277972, per=6.48%, avg=159555.60, stdev=137092.15, samples=5 00:30:11.491 iops : min= 1, max= 271, avg=155.60, stdev=133.96, samples=5 00:30:11.491 lat (msec) : 500=43.52%, 750=25.15%, 1000=0.97%, >=2000=30.37% 00:30:11.491 cpu : usr=0.04%, sys=0.89%, ctx=1256, majf=0, minf=32769 00:30:11.491 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.8% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:30:11.491 issued rwts: total=517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925931: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=8, BW=8369KiB/s (8570kB/s)(89.0MiB/10890msec) 00:30:11.491 slat (usec): min=672, max=2046.4k, avg=120585.67, stdev=461694.47 00:30:11.491 clat (msec): min=156, max=10886, avg=7530.66, stdev=3294.02 00:30:11.491 lat (msec): min=2146, max=10888, avg=7651.24, stdev=3216.53 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 157], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:30:11.491 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[ 8658], 00:30:11.491 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:30:11.491 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:30:11.491 | 99.99th=[10939] 00:30:11.491 lat (msec) : 250=1.12%, >=2000=98.88% 00:30:11.491 cpu : usr=0.00%, sys=0.78%, ctx=98, majf=0, minf=22785 00:30:11.491 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:30:11.491 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925932: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=141, BW=141MiB/s (148MB/s)(1527MiB/10808msec) 00:30:11.491 slat (usec): min=53, max=2170.5k, avg=6965.34, stdev=92850.86 00:30:11.491 clat (msec): min=146, max=4605, avg=845.92, stdev=1259.29 00:30:11.491 lat (msec): min=146, max=4607, avg=852.88, stdev=1263.09 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 174], 20.00th=[ 199], 00:30:11.491 | 30.00th=[ 253], 40.00th=[ 288], 50.00th=[ 334], 60.00th=[ 347], 00:30:11.491 | 70.00th=[ 405], 80.00th=[ 468], 90.00th=[ 2668], 95.00th=[ 4530], 00:30:11.491 | 99.00th=[ 4597], 99.50th=[ 4597], 
99.90th=[ 4597], 99.95th=[ 4597], 00:30:11.491 | 99.99th=[ 4597] 00:30:11.491 bw ( KiB/s): min= 1568, max=522240, per=10.59%, avg=260593.36, stdev=213547.15, samples=11 00:30:11.491 iops : min= 1, max= 510, avg=254.36, stdev=208.67, samples=11 00:30:11.491 lat (msec) : 250=28.88%, 500=51.74%, 750=1.77%, 1000=0.20%, >=2000=17.42% 00:30:11.491 cpu : usr=0.06%, sys=1.98%, ctx=2267, majf=0, minf=32769 00:30:11.491 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.491 issued rwts: total=1527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.491 job5: (groupid=0, jobs=1): err= 0: pid=925933: Wed May 15 02:56:13 2024 00:30:11.491 read: IOPS=81, BW=81.0MiB/s (85.0MB/s)(878MiB/10836msec) 00:30:11.491 slat (usec): min=61, max=2091.2k, avg=12147.70, stdev=132855.72 00:30:11.491 clat (msec): min=164, max=4524, avg=1019.36, stdev=1080.81 00:30:11.491 lat (msec): min=170, max=4526, avg=1031.51, stdev=1088.29 00:30:11.491 clat percentiles (msec): 00:30:11.491 | 1.00th=[ 171], 5.00th=[ 230], 10.00th=[ 259], 20.00th=[ 268], 00:30:11.491 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 313], 60.00th=[ 426], 00:30:11.491 | 70.00th=[ 2140], 80.00th=[ 2433], 90.00th=[ 2534], 95.00th=[ 2601], 00:30:11.491 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:30:11.491 | 99.99th=[ 4530] 00:30:11.491 bw ( KiB/s): min= 1501, max=479232, per=7.81%, avg=192080.75, stdev=189662.54, samples=8 00:30:11.491 iops : min= 1, max= 468, avg=187.50, stdev=185.26, samples=8 00:30:11.491 lat (msec) : 250=7.74%, 500=55.13%, 750=5.35%, 1000=0.46%, 2000=0.46% 00:30:11.491 lat (msec) : >=2000=30.87% 00:30:11.491 cpu : usr=0.05%, sys=1.47%, ctx=1301, majf=0, minf=32769 00:30:11.491 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:30:11.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.491 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.491 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925934: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=128, BW=129MiB/s (135MB/s)(1392MiB/10821msec) 00:30:11.492 slat (usec): min=55, max=2137.6k, avg=7652.13, stdev=89016.38 00:30:11.492 clat (msec): min=150, max=4719, avg=928.20, stdev=1278.70 00:30:11.492 lat (msec): min=150, max=4719, avg=935.85, stdev=1282.69 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 251], 00:30:11.492 | 30.00th=[ 288], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 430], 00:30:11.492 | 70.00th=[ 472], 80.00th=[ 810], 90.00th=[ 2668], 95.00th=[ 4396], 00:30:11.492 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:30:11.492 | 99.99th=[ 4732] 00:30:11.492 bw ( KiB/s): min= 1539, max=573440, per=8.77%, avg=215850.92, stdev=195152.80, samples=12 00:30:11.492 iops : min= 1, max= 560, avg=210.75, stdev=190.63, samples=12 00:30:11.492 lat (msec) : 250=19.54%, 500=51.87%, 750=7.18%, 1000=3.02%, >=2000=18.39% 00:30:11.492 cpu : usr=0.12%, sys=1.94%, ctx=2220, majf=0, minf=32769 00:30:11.492 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:30:11.492 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.492 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925935: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=61, BW=61.8MiB/s (64.8MB/s)(670MiB/10841msec) 00:30:11.492 slat (usec): min=55, max=2129.1k, avg=15929.11, stdev=151817.56 00:30:11.492 clat (msec): min=161, max=4994, avg=1625.92, stdev=1533.29 00:30:11.492 lat (msec): min=163, max=6950, avg=1641.85, stdev=1543.99 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 169], 20.00th=[ 510], 00:30:11.492 | 30.00th=[ 518], 40.00th=[ 531], 50.00th=[ 651], 60.00th=[ 760], 00:30:11.492 | 70.00th=[ 2500], 80.00th=[ 2769], 90.00th=[ 4329], 95.00th=[ 4463], 00:30:11.492 | 99.00th=[ 4597], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:30:11.492 | 99.99th=[ 5000] 00:30:11.492 bw ( KiB/s): min= 1492, max=259576, per=4.52%, avg=111098.80, stdev=106885.27, samples=10 00:30:11.492 iops : min= 1, max= 253, avg=108.40, stdev=104.36, samples=10 00:30:11.492 lat (msec) : 250=12.54%, 500=0.60%, 750=46.72%, 1000=0.75%, 2000=0.90% 00:30:11.492 lat (msec) : >=2000=38.51% 00:30:11.492 cpu : usr=0.04%, sys=1.34%, ctx=1411, majf=0, minf=32769 00:30:11.492 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:30:11.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:30:11.492 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925936: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=181, BW=181MiB/s (190MB/s)(1823MiB/10053msec) 00:30:11.492 slat (usec): min=50, max=2104.6k, avg=5481.00, stdev=75414.00 00:30:11.492 clat (msec): min=51, max=4595, avg=425.05, stdev=508.13 00:30:11.492 lat (msec): min=54, max=4597, avg=430.53, stdev=518.48 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 118], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 255], 00:30:11.492 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 266], 60.00th=[ 271], 00:30:11.492 | 70.00th=[ 317], 80.00th=[ 351], 90.00th=[ 567], 95.00th=[ 1670], 00:30:11.492 | 99.00th=[ 1720], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:30:11.492 | 99.99th=[ 4597] 00:30:11.492 bw ( KiB/s): min=190464, max=505856, per=15.68%, avg=385881.00, stdev=109447.57, samples=9 00:30:11.492 iops : min= 186, max= 494, avg=376.78, stdev=106.97, samples=9 00:30:11.492 lat (msec) : 100=0.27%, 250=12.12%, 500=74.44%, 750=5.27%, 2000=6.91% 00:30:11.492 lat (msec) : >=2000=0.99% 00:30:11.492 cpu : usr=0.13%, sys=2.13%, ctx=2403, majf=0, minf=32769 00:30:11.492 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:30:11.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.492 issued rwts: total=1823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925937: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=4, BW=4805KiB/s (4921kB/s)(51.0MiB/10868msec) 00:30:11.492 slat (usec): min=919, 
max=2252.4k, avg=210373.89, stdev=622293.39 00:30:11.492 clat (msec): min=138, max=10866, avg=9434.97, stdev=2740.71 00:30:11.492 lat (msec): min=2115, max=10867, avg=9645.34, stdev=2403.96 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 140], 5.00th=[ 4396], 10.00th=[ 4396], 20.00th=[ 6544], 00:30:11.492 | 30.00th=[10805], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:30:11.492 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:30:11.492 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:30:11.492 | 99.99th=[10805] 00:30:11.492 lat (msec) : 250=1.96%, >=2000=98.04% 00:30:11.492 cpu : usr=0.00%, sys=0.43%, ctx=95, majf=0, minf=13057 00:30:11.492 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:30:11.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:30:11.492 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925938: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=179, BW=179MiB/s (188MB/s)(1948MiB/10878msec) 00:30:11.492 slat (usec): min=60, max=2076.3k, avg=5493.77, stdev=65372.05 00:30:11.492 clat (msec): min=150, max=4466, avg=680.16, stdev=866.88 00:30:11.492 lat (msec): min=151, max=4466, avg=685.65, stdev=870.11 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:30:11.492 | 30.00th=[ 161], 40.00th=[ 300], 50.00th=[ 330], 60.00th=[ 439], 00:30:11.492 | 70.00th=[ 558], 80.00th=[ 776], 90.00th=[ 2500], 95.00th=[ 2903], 00:30:11.492 | 99.00th=[ 2970], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:30:11.492 | 99.99th=[ 4463] 00:30:11.492 bw ( KiB/s): min=45056, max=827392, per=10.82%, avg=266183.29, stdev=234885.21, samples=14 00:30:11.492 iops : min= 44, max= 808, avg=259.86, stdev=229.43, samples=14 00:30:11.492 lat (msec) : 250=36.14%, 500=31.72%, 750=9.03%, 1000=10.06%, >=2000=13.04% 00:30:11.492 cpu : usr=0.12%, sys=2.87%, ctx=1649, majf=0, minf=32769 00:30:11.492 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:30:11.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.492 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925939: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=92, BW=92.8MiB/s (97.3MB/s)(1009MiB/10874msec) 00:30:11.492 slat (usec): min=54, max=2097.9k, avg=10632.09, stdev=127129.28 00:30:11.492 clat (msec): min=128, max=6815, avg=847.08, stdev=1448.06 00:30:11.492 lat (msec): min=129, max=6818, avg=857.71, stdev=1460.65 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 155], 20.00th=[ 251], 00:30:11.492 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 266], 60.00th=[ 271], 00:30:11.492 | 70.00th=[ 397], 80.00th=[ 439], 90.00th=[ 2366], 95.00th=[ 4732], 00:30:11.492 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:30:11.492 | 99.99th=[ 6812] 00:30:11.492 bw ( KiB/s): min=127230, max=629524, per=14.66%, avg=360724.40, stdev=213370.13, samples=5 00:30:11.492 iops : min= 124, max= 614, avg=352.00, stdev=208.27, samples=5 
00:30:11.492 lat (msec) : 250=19.82%, 500=61.45%, 750=0.59%, >=2000=18.14% 00:30:11.492 cpu : usr=0.00%, sys=1.54%, ctx=1986, majf=0, minf=32769 00:30:11.492 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:30:11.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.492 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.492 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.492 job5: (groupid=0, jobs=1): err= 0: pid=925940: Wed May 15 02:56:13 2024 00:30:11.492 read: IOPS=53, BW=53.9MiB/s (56.5MB/s)(585MiB/10860msec) 00:30:11.492 slat (usec): min=849, max=2110.5k, avg=18274.89, stdev=168162.75 00:30:11.492 clat (msec): min=165, max=6830, avg=1200.93, stdev=1382.83 00:30:11.492 lat (msec): min=375, max=6834, avg=1219.21, stdev=1401.81 00:30:11.492 clat percentiles (msec): 00:30:11.492 | 1.00th=[ 376], 5.00th=[ 376], 10.00th=[ 380], 20.00th=[ 393], 00:30:11.492 | 30.00th=[ 464], 40.00th=[ 518], 50.00th=[ 531], 60.00th=[ 535], 00:30:11.492 | 70.00th=[ 542], 80.00th=[ 2467], 90.00th=[ 2668], 95.00th=[ 2802], 00:30:11.492 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:30:11.492 | 99.99th=[ 6812] 00:30:11.492 bw ( KiB/s): min=24576, max=286720, per=7.60%, avg=187035.80, stdev=106834.26, samples=5 00:30:11.493 iops : min= 24, max= 280, avg=182.40, stdev=104.35, samples=5 00:30:11.493 lat (msec) : 250=0.17%, 500=36.92%, 750=35.21%, >=2000=27.69% 00:30:11.493 cpu : usr=0.03%, sys=1.20%, ctx=1698, majf=0, minf=32769 00:30:11.493 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:30:11.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.493 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:30:11.493 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.493 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.493 00:30:11.493 Run status group 0 (all jobs): 00:30:11.493 READ: bw=2403MiB/s (2520MB/s), 716KiB/s-203MiB/s (733kB/s-213MB/s), io=30.5GiB (32.8GB), run=10013-13002msec 00:30:11.493 00:30:11.493 Disk stats (read/write): 00:30:11.493 nvme0n1: ios=36066/0, merge=0/0, ticks=9188466/0, in_queue=9188466, util=98.41% 00:30:11.493 nvme1n1: ios=6769/0, merge=0/0, ticks=8003767/0, in_queue=8003767, util=98.67% 00:30:11.493 nvme2n1: ios=27783/0, merge=0/0, ticks=11939834/0, in_queue=11939834, util=98.84% 00:30:11.493 nvme3n1: ios=40120/0, merge=0/0, ticks=12532990/0, in_queue=12532990, util=98.75% 00:30:11.493 nvme4n1: ios=33846/0, merge=0/0, ticks=12598065/0, in_queue=12598065, util=99.08% 00:30:11.493 nvme5n1: ios=103111/0, merge=0/0, ticks=9155689/0, in_queue=9155689, util=99.32% 00:30:11.493 02:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:30:11.493 02:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:30:11.493 02:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:11.493 02:56:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:30:11.493 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local 
i=0 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000000 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:11.493 02:56:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:12.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:12.429 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:30:12.429 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local i=0 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000001 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:12.687 02:56:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:30:13.623 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local i=0 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000002 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o 
NAME,SERIAL 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:13.623 02:56:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:30:14.559 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local i=0 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000003 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:14.559 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:14.560 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.560 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:14.560 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.560 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:14.560 02:56:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:30:15.497 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local i=0 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000004 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:30:15.497 02:56:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:30:16.433 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:30:16.433 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:30:16.433 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # local i=0 00:30:16.433 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:30:16.433 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1217 -- # grep -q -w SPDK00000000000005 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1228 -- # return 0 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:16.693 rmmod nvme_rdma 00:30:16.693 rmmod nvme_fabrics 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 924614 ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 924614 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@947 -- # '[' -z 924614 ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # kill -0 924614 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # uname 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # 
ps --no-headers -o comm= 924614 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@965 -- # echo 'killing process with pid 924614' 00:30:16.693 killing process with pid 924614 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # kill 924614 00:30:16.693 [2024-05-15 02:56:19.874011] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:16.693 02:56:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@971 -- # wait 924614 00:30:16.693 [2024-05-15 02:56:19.937702] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:17.262 02:56:20 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:17.262 02:56:20 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:17.262 00:30:17.262 real 0m34.746s 00:30:17.262 user 1m55.782s 00:30:17.262 sys 0m15.905s 00:30:17.262 02:56:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:17.262 02:56:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:30:17.262 ************************************ 00:30:17.262 END TEST nvmf_srq_overwhelm 00:30:17.262 ************************************ 00:30:17.262 02:56:20 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:30:17.262 02:56:20 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:17.262 02:56:20 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:17.262 02:56:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:17.262 ************************************ 00:30:17.262 START TEST nvmf_shutdown 00:30:17.262 ************************************ 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:30:17.262 * Looking for test storage... 
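The srq_overwhelm teardown traced above follows one pattern per subsystem: disconnect the NVMe-oF controller, poll lsblk until the device with that serial number disappears, then delete the subsystem over RPC. A condensed sketch of that loop, assuming the usual scripts/rpc.py entry point behind rpc_cmd and an illustrative retry limit:

    #!/usr/bin/env bash
    # Teardown pattern from the trace above: disconnect, wait for the serial
    # to vanish from lsblk, then remove the subsystem via the SPDK RPC socket.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path

    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

        serial=$(printf 'SPDK%014d' "$i")      # serial format as seen in the trace
        for retry in $(seq 1 20); do           # retry limit is illustrative
            # Done once the serial no longer shows up in the block device list.
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
            sleep 1
        done

        "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done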
00:30:17.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.262 02:56:20 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:17.263 02:56:20 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:17.521 ************************************ 00:30:17.521 START TEST nvmf_shutdown_tc1 00:30:17.521 ************************************ 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:30:17.521 02:56:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:17.521 02:56:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:24.129 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:24.129 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:24.129 Found net devices under 0000:18:00.0: mlx_0_0 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:24.129 Found net devices under 0000:18:00.1: mlx_0_1 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:24.129 02:56:26 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:24.129 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:24.130 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:24.130 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:30:24.130 altname enp24s0f0np0 00:30:24.130 altname ens785f0np0 00:30:24.130 inet 192.168.100.8/24 scope global mlx_0_0 00:30:24.130 valid_lft forever preferred_lft forever 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:24.130 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:24.130 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:30:24.130 altname enp24s0f1np1 00:30:24.130 altname ens785f1np1 00:30:24.130 inet 192.168.100.9/24 scope global mlx_0_1 00:30:24.130 valid_lft forever preferred_lft forever 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:24.130 02:56:26 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:24.130 192.168.100.9' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:24.130 192.168.100.9' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:24.130 192.168.100.9' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=931374 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 931374 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 931374 ']' 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:24.130 02:56:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.130 [2024-05-15 02:56:27.021758] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:24.130 [2024-05-15 02:56:27.021832] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.130 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.130 [2024-05-15 02:56:27.121248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.131 [2024-05-15 02:56:27.168543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.131 [2024-05-15 02:56:27.168593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.131 [2024-05-15 02:56:27.168607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.131 [2024-05-15 02:56:27.168620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:24.131 [2024-05-15 02:56:27.168631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.131 [2024-05-15 02:56:27.168745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.131 [2024-05-15 02:56:27.168952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.131 [2024-05-15 02:56:27.168849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.131 [2024-05-15 02:56:27.168951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.131 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.131 [2024-05-15 02:56:27.363313] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f29060/0x1f2d550) succeed. 00:30:24.131 [2024-05-15 02:56:27.378385] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f2a6a0/0x1f6ebe0) succeed. 
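The transport is now up (rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, confirmed by the two create_ib_device notices). The rpcs.txt batch assembled just below provisions one Malloc bdev and one subsystem per index, using MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from shutdown.sh. Spelled out as direct rpc.py calls it amounts to roughly the following sketch; the serial-number format and the -a (allow any host) flag are illustrative assumptions, since the test drives this through rpc_cmd and a batched config rather than these exact commands:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path

    # RDMA transport, as issued in the trace above.
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in $(seq 1 10); do
        # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
        "$rpc" bdev_malloc_create 64 512 -b "Malloc${i}"
        # Subsystem with an illustrative serial; -a allows any host NQN to connect.
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a -s "$(printf 'SPDK%014d' "$i")"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Malloc${i}"
        # Listener on the RDMA address and port established earlier in this run.
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" -t rdma -a 192.168.100.8 -s 4420
    done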
00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.390 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.391 02:56:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 Malloc1 00:30:24.391 [2024-05-15 02:56:27.636660] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:30:24.391 [2024-05-15 02:56:27.637039] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:24.391 Malloc2 00:30:24.650 Malloc3 00:30:24.650 Malloc4 00:30:24.650 Malloc5 00:30:24.650 Malloc6 00:30:24.650 Malloc7 00:30:24.650 Malloc8 00:30:24.910 Malloc9 00:30:24.910 Malloc10 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=931508 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 931508 /var/tmp/bdevperf.sock 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 931508 ']' 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:24.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
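bdev_svc is started on /var/tmp/bdevperf.sock with --json /dev/fd/63, and gen_nvmf_target_json (traced below) produces that config: one bdev_nvme_attach_controller entry per subsystem index, filled in from the heredoc template. With the values established earlier in this run (rdma transport, NVMF_FIRST_TARGET_IP=192.168.100.8, NVMF_PORT=4420, digests left at their false defaults), the entry for subsystem 1 should expand to roughly:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

The generator wraps these entries in a top-level bdev subsystem config that bdev_svc reads from /dev/fd/63; that wrapper is not visible in this part of the trace, so its exact shape is assumed here.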
00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.910 { 00:30:24.910 "params": { 00:30:24.910 "name": "Nvme$subsystem", 00:30:24.910 "trtype": "$TEST_TRANSPORT", 00:30:24.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.910 "adrfam": "ipv4", 00:30:24.910 "trsvcid": "$NVMF_PORT", 00:30:24.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.910 "hdgst": ${hdgst:-false}, 00:30:24.910 "ddgst": ${ddgst:-false} 00:30:24.910 }, 00:30:24.910 "method": "bdev_nvme_attach_controller" 00:30:24.910 } 00:30:24.910 EOF 00:30:24.910 )") 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.910 { 00:30:24.910 "params": { 00:30:24.910 "name": "Nvme$subsystem", 00:30:24.910 "trtype": "$TEST_TRANSPORT", 00:30:24.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.910 "adrfam": "ipv4", 00:30:24.910 "trsvcid": "$NVMF_PORT", 00:30:24.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.910 "hdgst": ${hdgst:-false}, 00:30:24.910 "ddgst": ${ddgst:-false} 00:30:24.910 }, 00:30:24.910 "method": "bdev_nvme_attach_controller" 00:30:24.910 } 00:30:24.910 EOF 00:30:24.910 )") 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.910 { 00:30:24.910 "params": { 00:30:24.910 "name": "Nvme$subsystem", 00:30:24.910 "trtype": "$TEST_TRANSPORT", 00:30:24.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.910 "adrfam": "ipv4", 00:30:24.910 "trsvcid": "$NVMF_PORT", 00:30:24.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.910 "hdgst": ${hdgst:-false}, 00:30:24.910 "ddgst": ${ddgst:-false} 00:30:24.910 }, 00:30:24.910 "method": "bdev_nvme_attach_controller" 00:30:24.910 } 00:30:24.910 EOF 00:30:24.910 )") 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.910 { 00:30:24.910 "params": { 00:30:24.910 "name": "Nvme$subsystem", 00:30:24.910 "trtype": "$TEST_TRANSPORT", 00:30:24.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.910 "adrfam": "ipv4", 00:30:24.910 "trsvcid": 
"$NVMF_PORT", 00:30:24.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.910 "hdgst": ${hdgst:-false}, 00:30:24.910 "ddgst": ${ddgst:-false} 00:30:24.910 }, 00:30:24.910 "method": "bdev_nvme_attach_controller" 00:30:24.910 } 00:30:24.910 EOF 00:30:24.910 )") 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.910 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.910 { 00:30:24.910 "params": { 00:30:24.910 "name": "Nvme$subsystem", 00:30:24.910 "trtype": "$TEST_TRANSPORT", 00:30:24.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.910 "adrfam": "ipv4", 00:30:24.910 "trsvcid": "$NVMF_PORT", 00:30:24.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.910 "hdgst": ${hdgst:-false}, 00:30:24.910 "ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.911 { 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme$subsystem", 00:30:24.911 "trtype": "$TEST_TRANSPORT", 00:30:24.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "$NVMF_PORT", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.911 "hdgst": ${hdgst:-false}, 00:30:24.911 "ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 [2024-05-15 02:56:28.149113] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:24.911 [2024-05-15 02:56:28.149188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.911 { 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme$subsystem", 00:30:24.911 "trtype": "$TEST_TRANSPORT", 00:30:24.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "$NVMF_PORT", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.911 "hdgst": ${hdgst:-false}, 00:30:24.911 "ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.911 { 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme$subsystem", 00:30:24.911 "trtype": "$TEST_TRANSPORT", 00:30:24.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "$NVMF_PORT", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.911 "hdgst": ${hdgst:-false}, 00:30:24.911 "ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.911 { 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme$subsystem", 00:30:24.911 "trtype": "$TEST_TRANSPORT", 00:30:24.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "$NVMF_PORT", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.911 "hdgst": ${hdgst:-false}, 00:30:24.911 "ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.911 { 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme$subsystem", 00:30:24.911 "trtype": "$TEST_TRANSPORT", 00:30:24.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "$NVMF_PORT", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.911 "hdgst": ${hdgst:-false}, 00:30:24.911 
"ddgst": ${ddgst:-false} 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 } 00:30:24.911 EOF 00:30:24.911 )") 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:24.911 02:56:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme1", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 },{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme2", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 },{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme3", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 },{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme4", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 },{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme5", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.911 },{ 00:30:24.911 "params": { 00:30:24.911 "name": "Nvme6", 00:30:24.911 "trtype": "rdma", 00:30:24.911 "traddr": "192.168.100.8", 00:30:24.911 "adrfam": "ipv4", 00:30:24.911 "trsvcid": "4420", 00:30:24.911 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:24.911 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:24.911 "hdgst": false, 00:30:24.911 "ddgst": false 00:30:24.911 }, 00:30:24.911 "method": "bdev_nvme_attach_controller" 00:30:24.912 },{ 00:30:24.912 "params": { 00:30:24.912 "name": "Nvme7", 00:30:24.912 "trtype": "rdma", 00:30:24.912 "traddr": "192.168.100.8", 00:30:24.912 "adrfam": "ipv4", 00:30:24.912 "trsvcid": "4420", 00:30:24.912 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:24.912 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:24.912 "hdgst": false, 00:30:24.912 "ddgst": false 00:30:24.912 }, 00:30:24.912 
"method": "bdev_nvme_attach_controller" 00:30:24.912 },{ 00:30:24.912 "params": { 00:30:24.912 "name": "Nvme8", 00:30:24.912 "trtype": "rdma", 00:30:24.912 "traddr": "192.168.100.8", 00:30:24.912 "adrfam": "ipv4", 00:30:24.912 "trsvcid": "4420", 00:30:24.912 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:24.912 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:24.912 "hdgst": false, 00:30:24.912 "ddgst": false 00:30:24.912 }, 00:30:24.912 "method": "bdev_nvme_attach_controller" 00:30:24.912 },{ 00:30:24.912 "params": { 00:30:24.912 "name": "Nvme9", 00:30:24.912 "trtype": "rdma", 00:30:24.912 "traddr": "192.168.100.8", 00:30:24.912 "adrfam": "ipv4", 00:30:24.912 "trsvcid": "4420", 00:30:24.912 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:24.912 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:24.912 "hdgst": false, 00:30:24.912 "ddgst": false 00:30:24.912 }, 00:30:24.912 "method": "bdev_nvme_attach_controller" 00:30:24.912 },{ 00:30:24.912 "params": { 00:30:24.912 "name": "Nvme10", 00:30:24.912 "trtype": "rdma", 00:30:24.912 "traddr": "192.168.100.8", 00:30:24.912 "adrfam": "ipv4", 00:30:24.912 "trsvcid": "4420", 00:30:24.912 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:24.912 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:24.912 "hdgst": false, 00:30:24.912 "ddgst": false 00:30:24.912 }, 00:30:24.912 "method": "bdev_nvme_attach_controller" 00:30:24.912 }' 00:30:25.171 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.171 [2024-05-15 02:56:28.259986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.171 [2024-05-15 02:56:28.307176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 931508 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:26.108 02:56:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:27.047 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 931508 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 931374 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.047 "adrfam": "ipv4", 00:30:27.047 "trsvcid": "$NVMF_PORT", 00:30:27.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.047 "hdgst": ${hdgst:-false}, 00:30:27.047 "ddgst": ${ddgst:-false} 00:30:27.047 }, 00:30:27.047 "method": "bdev_nvme_attach_controller" 00:30:27.047 } 00:30:27.047 EOF 00:30:27.047 )") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.047 "adrfam": "ipv4", 00:30:27.047 "trsvcid": "$NVMF_PORT", 00:30:27.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.047 "hdgst": ${hdgst:-false}, 00:30:27.047 "ddgst": ${ddgst:-false} 00:30:27.047 }, 00:30:27.047 "method": "bdev_nvme_attach_controller" 00:30:27.047 } 00:30:27.047 EOF 00:30:27.047 )") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.047 "adrfam": "ipv4", 00:30:27.047 "trsvcid": "$NVMF_PORT", 00:30:27.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.047 "hdgst": ${hdgst:-false}, 00:30:27.047 "ddgst": ${ddgst:-false} 00:30:27.047 }, 00:30:27.047 "method": "bdev_nvme_attach_controller" 00:30:27.047 } 00:30:27.047 EOF 00:30:27.047 )") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.047 "adrfam": "ipv4", 00:30:27.047 "trsvcid": "$NVMF_PORT", 00:30:27.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.047 "hdgst": ${hdgst:-false}, 00:30:27.047 "ddgst": ${ddgst:-false} 00:30:27.047 }, 00:30:27.047 "method": "bdev_nvme_attach_controller" 00:30:27.047 } 00:30:27.047 EOF 00:30:27.047 )") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.047 "adrfam": "ipv4", 00:30:27.047 "trsvcid": "$NVMF_PORT", 00:30:27.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.047 "hdgst": ${hdgst:-false}, 00:30:27.047 "ddgst": ${ddgst:-false} 00:30:27.047 }, 00:30:27.047 "method": "bdev_nvme_attach_controller" 00:30:27.047 } 00:30:27.047 EOF 00:30:27.047 )") 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.047 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.047 { 00:30:27.047 "params": { 00:30:27.047 "name": "Nvme$subsystem", 00:30:27.047 "trtype": "$TEST_TRANSPORT", 00:30:27.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "$NVMF_PORT", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.048 "hdgst": ${hdgst:-false}, 00:30:27.048 "ddgst": ${ddgst:-false} 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 } 00:30:27.048 EOF 00:30:27.048 )") 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.048 [2024-05-15 02:56:30.236408] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:27.048 [2024-05-15 02:56:30.236489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931889 ] 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.048 { 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme$subsystem", 00:30:27.048 "trtype": "$TEST_TRANSPORT", 00:30:27.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "$NVMF_PORT", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.048 "hdgst": ${hdgst:-false}, 00:30:27.048 "ddgst": ${ddgst:-false} 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 } 00:30:27.048 EOF 00:30:27.048 )") 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.048 { 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme$subsystem", 00:30:27.048 "trtype": "$TEST_TRANSPORT", 00:30:27.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "$NVMF_PORT", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.048 "hdgst": ${hdgst:-false}, 00:30:27.048 "ddgst": ${ddgst:-false} 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 } 00:30:27.048 EOF 00:30:27.048 )") 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.048 { 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme$subsystem", 00:30:27.048 "trtype": "$TEST_TRANSPORT", 00:30:27.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "$NVMF_PORT", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.048 "hdgst": ${hdgst:-false}, 00:30:27.048 "ddgst": ${ddgst:-false} 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 } 00:30:27.048 EOF 00:30:27.048 )") 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.048 { 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme$subsystem", 00:30:27.048 "trtype": "$TEST_TRANSPORT", 00:30:27.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "$NVMF_PORT", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.048 "hdgst": 
${hdgst:-false}, 00:30:27.048 "ddgst": ${ddgst:-false} 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 } 00:30:27.048 EOF 00:30:27.048 )") 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:27.048 02:56:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme1", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme2", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme3", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme4", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme5", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme6", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme7", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 
00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme8", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme9", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 },{ 00:30:27.048 "params": { 00:30:27.048 "name": "Nvme10", 00:30:27.048 "trtype": "rdma", 00:30:27.048 "traddr": "192.168.100.8", 00:30:27.048 "adrfam": "ipv4", 00:30:27.048 "trsvcid": "4420", 00:30:27.048 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:27.048 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:27.048 "hdgst": false, 00:30:27.048 "ddgst": false 00:30:27.048 }, 00:30:27.048 "method": "bdev_nvme_attach_controller" 00:30:27.048 }' 00:30:27.048 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.307 [2024-05-15 02:56:30.348976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.307 [2024-05-15 02:56:30.397613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.244 Running I/O for 1 seconds... 00:30:29.623 00:30:29.623 Latency(us) 00:30:29.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.623 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme1n1 : 1.24 257.47 16.09 0.00 0.00 244640.01 8947.09 255305.46 00:30:29.623 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme2n1 : 1.24 257.12 16.07 0.00 0.00 240131.47 11397.57 244363.80 00:30:29.623 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme3n1 : 1.25 256.78 16.05 0.00 0.00 235952.57 11853.47 235245.75 00:30:29.623 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme4n1 : 1.25 256.38 16.02 0.00 0.00 232339.01 12594.31 217921.45 00:30:29.623 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme5n1 : 1.25 255.98 16.00 0.00 0.00 228163.23 13449.13 202420.76 00:30:29.623 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme6n1 : 1.25 255.65 15.98 0.00 0.00 223336.89 13905.03 190567.29 00:30:29.623 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme7n1 : 1.25 255.20 15.95 0.00 0.00 220419.47 14816.83 170507.58 00:30:29.623 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: 
start 0x0 length 0x400 00:30:29.623 Nvme8n1 : 1.26 254.87 15.93 0.00 0.00 214797.80 15386.71 160477.72 00:30:29.623 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme9n1 : 1.26 254.42 15.90 0.00 0.00 212265.76 16412.49 147712.45 00:30:29.623 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.623 Verification LBA range: start 0x0 length 0x400 00:30:29.623 Nvme10n1 : 1.26 253.35 15.83 0.00 0.00 208140.29 4017.64 170507.58 00:30:29.623 =================================================================================================================== 00:30:29.623 Total : 2557.23 159.83 0.00 0.00 226018.65 4017.64 255305.46 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:29.623 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:29.623 rmmod nvme_rdma 00:30:29.883 rmmod nvme_fabrics 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 931374 ']' 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 931374 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 931374 ']' 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 931374 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 931374 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:29.883 02:56:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 931374' 00:30:29.883 killing process with pid 931374 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 931374 00:30:29.883 [2024-05-15 02:56:32.998902] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:29.883 02:56:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 931374 00:30:29.883 [2024-05-15 02:56:33.101244] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:30.451 00:30:30.451 real 0m12.939s 00:30:30.451 user 0m29.720s 00:30:30.451 sys 0m6.160s 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:30.451 ************************************ 00:30:30.451 END TEST nvmf_shutdown_tc1 00:30:30.451 ************************************ 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.451 ************************************ 00:30:30.451 START TEST nvmf_shutdown_tc2 00:30:30.451 ************************************ 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:30.451 02:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.451 02:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:30.451 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:30.451 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.451 02:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:30.451 Found net devices under 0000:18:00.0: mlx_0_0 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:30.451 Found net devices under 0000:18:00.1: mlx_0_1 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:30.451 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:30.452 02:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:30.452 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.452 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:30:30.452 altname enp24s0f0np0 00:30:30.452 altname ens785f0np0 00:30:30.452 inet 192.168.100.8/24 scope global mlx_0_0 00:30:30.452 valid_lft forever preferred_lft forever 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:30.452 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:30.452 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:30.452 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:30:30.452 altname enp24s0f1np1 00:30:30.452 altname ens785f1np1 00:30:30.452 inet 192.168.100.9/24 scope global mlx_0_1 00:30:30.452 valid_lft forever preferred_lft forever 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@105 -- # continue 2 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:30.712 192.168.100.9' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:30.712 192.168.100.9' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:30.712 192.168.100.9' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=932367 00:30:30.712 02:56:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 932367 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 932367 ']' 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:30.712 02:56:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.712 [2024-05-15 02:56:33.906996] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:30.712 [2024-05-15 02:56:33.907073] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.712 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.972 [2024-05-15 02:56:34.009521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.972 [2024-05-15 02:56:34.061399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.972 [2024-05-15 02:56:34.061449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.972 [2024-05-15 02:56:34.061464] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.972 [2024-05-15 02:56:34.061477] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.972 [2024-05-15 02:56:34.061488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
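Note: the trace above condenses nvmf/common.sh's RDMA address discovery and the tc2 target startup. A minimal bash sketch of those steps, reusing the interface names, paths and flags visible in this log; the readiness poll at the end is an assumption standing in for the script's waitforlisten helper, and the rpc.py path is assumed relative to the spdk checkout:

get_ip_address() {
    local interface=$1
    # Field 4 of `ip -o -4 addr show` is the CIDR address; drop the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma                                # host-side NVMe/RDMA driver used by later steps

# nvmfappstart -m 0x1E: run the target on cores 1-4 with all tracepoint groups (0xFFFF) enabled
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Stand-in for waitforlisten: block until /var/tmp/spdk.sock answers RPCs
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done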
00:30:30.972 [2024-05-15 02:56:34.061603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.972 [2024-05-15 02:56:34.061713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.972 [2024-05-15 02:56:34.061817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.972 [2024-05-15 02:56:34.061817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:30.972 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.232 [2024-05-15 02:56:34.261955] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x72b060/0x72f550) succeed. 00:30:31.232 [2024-05-15 02:56:34.277021] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x72c6a0/0x770be0) succeed. 
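With the target up, shutdown.sh@20 creates the RDMA transport over the target's default RPC socket; the two "Create IB device mlx5_X(...) succeed." notices above are the expected result on this dual-port mlx5 adapter. An equivalent standalone invocation, with the arguments copied from the traced rpc_cmd call:

./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
    -t rdma \
    --num-shared-buffers 1024 \
    -u 8192    # I/O unit size in bytes, as passed above

One such notice is printed per IB device the transport can bind, which is why both mlx5_0 and mlx5_1 appear here.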
00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.232 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.232 Malloc1 00:30:31.491 [2024-05-15 02:56:34.541530] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:30:31.491 [2024-05-15 02:56:34.541946] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:31.491 Malloc2 00:30:31.491 Malloc3 00:30:31.491 Malloc4 00:30:31.491 Malloc5 00:30:31.491 Malloc6 00:30:31.751 Malloc7 00:30:31.751 Malloc8 00:30:31.751 Malloc9 00:30:31.751 Malloc10 00:30:31.751 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.751 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:31.751 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:31.751 02:56:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=932595 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 932595 /var/tmp/bdevperf.sock 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 932595 ']' 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
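The Malloc1 .. Malloc10 lines above come from the create_subsystems step: for every i in 1..10 the script appends one block of RPCs to rpcs.txt (the shutdown.sh@28 cat calls) and then replays the whole batch through its rpc_cmd helper, which is why the 192.168.100.8:4420 listener notice appears once the first subsystem goes live. The exact block is not echoed in this log; a hedged reconstruction, with the bdev size (64 MiB, 512-byte blocks) and serial numbers assumed:

for i in $(seq 1 10); do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
    } >> rpcs.txt
done
# Replay the batch; the real script pipes rpcs.txt into rpc_cmd, a per-line loop
# is the simplest equivalent
while read -r cmd; do
    ./scripts/rpc.py -s /var/tmp/spdk.sock $cmd   # word-splitting of $cmd is intentional
done < rpcs.txt

bdevperf is then started on its own socket (/var/tmp/bdevperf.sock) with -q 64 -o 65536 -w verify -t 10, and the JSON it receives over /dev/fd/63 is generated next by gen_nvmf_target_json 1 .. 10, as traced below.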
00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.751 { 00:30:31.751 "params": { 00:30:31.751 "name": "Nvme$subsystem", 00:30:31.751 "trtype": "$TEST_TRANSPORT", 00:30:31.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.751 "adrfam": "ipv4", 00:30:31.751 "trsvcid": "$NVMF_PORT", 00:30:31.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.751 "hdgst": ${hdgst:-false}, 00:30:31.751 "ddgst": ${ddgst:-false} 00:30:31.751 }, 00:30:31.751 "method": "bdev_nvme_attach_controller" 00:30:31.751 } 00:30:31.751 EOF 00:30:31.751 )") 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.751 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.751 { 00:30:31.751 "params": { 00:30:31.751 "name": "Nvme$subsystem", 00:30:31.751 "trtype": "$TEST_TRANSPORT", 00:30:31.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.752 "adrfam": "ipv4", 00:30:31.752 "trsvcid": "$NVMF_PORT", 00:30:31.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.752 "hdgst": ${hdgst:-false}, 00:30:31.752 "ddgst": ${ddgst:-false} 00:30:31.752 }, 00:30:31.752 "method": "bdev_nvme_attach_controller" 00:30:31.752 } 00:30:31.752 EOF 00:30:31.752 )") 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.752 { 00:30:31.752 "params": { 00:30:31.752 "name": "Nvme$subsystem", 00:30:31.752 "trtype": "$TEST_TRANSPORT", 00:30:31.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.752 "adrfam": "ipv4", 00:30:31.752 "trsvcid": "$NVMF_PORT", 00:30:31.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.752 "hdgst": ${hdgst:-false}, 00:30:31.752 "ddgst": ${ddgst:-false} 00:30:31.752 }, 00:30:31.752 "method": "bdev_nvme_attach_controller" 00:30:31.752 } 00:30:31.752 EOF 00:30:31.752 )") 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.752 { 00:30:31.752 "params": { 00:30:31.752 "name": "Nvme$subsystem", 00:30:31.752 
"trtype": "$TEST_TRANSPORT", 00:30:31.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.752 "adrfam": "ipv4", 00:30:31.752 "trsvcid": "$NVMF_PORT", 00:30:31.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.752 "hdgst": ${hdgst:-false}, 00:30:31.752 "ddgst": ${ddgst:-false} 00:30:31.752 }, 00:30:31.752 "method": "bdev_nvme_attach_controller" 00:30:31.752 } 00:30:31.752 EOF 00:30:31.752 )") 00:30:31.752 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.011 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.011 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.011 { 00:30:32.011 "params": { 00:30:32.011 "name": "Nvme$subsystem", 00:30:32.011 "trtype": "$TEST_TRANSPORT", 00:30:32.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.011 "adrfam": "ipv4", 00:30:32.011 "trsvcid": "$NVMF_PORT", 00:30:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.011 "hdgst": ${hdgst:-false}, 00:30:32.011 "ddgst": ${ddgst:-false} 00:30:32.011 }, 00:30:32.011 "method": "bdev_nvme_attach_controller" 00:30:32.011 } 00:30:32.011 EOF 00:30:32.011 )") 00:30:32.011 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.011 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.011 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.011 { 00:30:32.011 "params": { 00:30:32.011 "name": "Nvme$subsystem", 00:30:32.012 "trtype": "$TEST_TRANSPORT", 00:30:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "$NVMF_PORT", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.012 "hdgst": ${hdgst:-false}, 00:30:32.012 "ddgst": ${ddgst:-false} 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 } 00:30:32.012 EOF 00:30:32.012 )") 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.012 [2024-05-15 02:56:35.052684] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:32.012 [2024-05-15 02:56:35.052754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932595 ] 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.012 { 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme$subsystem", 00:30:32.012 "trtype": "$TEST_TRANSPORT", 00:30:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "$NVMF_PORT", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.012 "hdgst": ${hdgst:-false}, 00:30:32.012 "ddgst": ${ddgst:-false} 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 } 00:30:32.012 EOF 00:30:32.012 )") 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.012 { 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme$subsystem", 00:30:32.012 "trtype": "$TEST_TRANSPORT", 00:30:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "$NVMF_PORT", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.012 "hdgst": ${hdgst:-false}, 00:30:32.012 "ddgst": ${ddgst:-false} 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 } 00:30:32.012 EOF 00:30:32.012 )") 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.012 { 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme$subsystem", 00:30:32.012 "trtype": "$TEST_TRANSPORT", 00:30:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "$NVMF_PORT", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.012 "hdgst": ${hdgst:-false}, 00:30:32.012 "ddgst": ${ddgst:-false} 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 } 00:30:32.012 EOF 00:30:32.012 )") 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.012 { 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme$subsystem", 00:30:32.012 "trtype": "$TEST_TRANSPORT", 00:30:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "$NVMF_PORT", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.012 "hdgst": 
${hdgst:-false}, 00:30:32.012 "ddgst": ${ddgst:-false} 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 } 00:30:32.012 EOF 00:30:32.012 )") 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:32.012 02:56:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme1", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme2", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme3", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme4", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme5", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme6", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme7", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 
00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme8", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme9", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 },{ 00:30:32.012 "params": { 00:30:32.012 "name": "Nvme10", 00:30:32.012 "trtype": "rdma", 00:30:32.012 "traddr": "192.168.100.8", 00:30:32.012 "adrfam": "ipv4", 00:30:32.012 "trsvcid": "4420", 00:30:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:32.012 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:32.012 "hdgst": false, 00:30:32.012 "ddgst": false 00:30:32.012 }, 00:30:32.012 "method": "bdev_nvme_attach_controller" 00:30:32.012 }' 00:30:32.012 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.012 [2024-05-15 02:56:35.161490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.012 [2024-05-15 02:56:35.208923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.951 Running I/O for 10 seconds... 00:30:32.951 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:32.951 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:30:32.951 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:32.951 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:32.951 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:33.210 02:56:36 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:33.210 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.514 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:33.514 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:33.514 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:33.514 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:33.806 02:56:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 932595 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 932595 ']' 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 932595 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 932595 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 932595' 00:30:33.806 killing process with pid 932595 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 932595 00:30:33.806 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 932595 00:30:34.065 Received shutdown signal, test time was about 1.019421 seconds 00:30:34.065 00:30:34.065 Latency(us) 00:30:34.065 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:30:34.065 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme1n1 : 1.01 253.24 15.83 0.00 0.00 247394.39 16412.49 282659.62 00:30:34.065 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme2n1 : 1.01 252.68 15.79 0.00 0.00 243264.56 17552.25 258952.68 00:30:34.065 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme3n1 : 1.01 283.82 17.74 0.00 0.00 209580.00 6297.15 217009.64 00:30:34.065 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme4n1 : 1.02 270.65 16.92 0.00 0.00 213751.60 17666.23 212450.62 00:30:34.065 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme5n1 : 1.00 256.18 16.01 0.00 0.00 223038.78 18464.06 196038.12 00:30:34.065 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme6n1 : 1.00 255.59 15.97 0.00 0.00 218051.01 19717.79 173242.99 00:30:34.065 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme7n1 : 1.02 269.31 16.83 0.00 0.00 199838.42 13506.11 157742.30 00:30:34.065 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme8n1 : 1.02 253.33 15.83 0.00 0.00 206138.88 13050.21 142241.61 00:30:34.065 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.065 Nvme9n1 : 1.01 254.31 15.89 0.00 0.00 202341.73 14474.91 148624.25 00:30:34.065 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:34.065 Verification LBA range: start 0x0 length 0x400 00:30:34.066 Nvme10n1 : 1.01 190.31 11.89 0.00 0.00 262680.64 15614.66 291777.67 00:30:34.066 =================================================================================================================== 00:30:34.066 Total : 2539.40 158.71 0.00 0.00 221208.46 6297.15 291777.67 00:30:34.325 02:56:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 932367 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.262 02:56:38 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:35.262 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:35.263 rmmod nvme_rdma 00:30:35.263 rmmod nvme_fabrics 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 932367 ']' 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 932367 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 932367 ']' 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 932367 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:35.263 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 932367 00:30:35.523 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:35.523 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:35.523 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 932367' 00:30:35.523 killing process with pid 932367 00:30:35.523 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 932367 00:30:35.523 [2024-05-15 02:56:38.558239] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:35.523 02:56:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 932367 00:30:35.523 [2024-05-15 02:56:38.663833] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:35.783 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:35.783 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:35.783 00:30:35.783 real 0m5.474s 00:30:35.783 user 0m21.835s 00:30:35.783 sys 0m1.271s 00:30:35.783 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:35.783 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.783 ************************************ 00:30:35.783 END TEST nvmf_shutdown_tc2 00:30:35.783 ************************************ 00:30:36.043 02:56:39 
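Before tc3 starts, the tc2 block above is worth recapping: once bdevperf is running, waitforio polls the Nvme1n1 bdev's read counter over /var/tmp/bdevperf.sock until at least 100 reads complete (3 on the first poll, 131 on the second here), bdevperf (pid 932595) is then killed and prints its per-controller latency table, and only afterwards is the still-running target (pid 932367) shut down and nvme_rdma/nvme_fabrics removed. A compact sketch of that readiness loop and teardown, with helper and variable names taken from the trace:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i count
    for (( i = 10; i != 0; i-- )); do     # up to 10 polls, 0.25 s apart
        count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # returns 0 once at least 100 reads are seen
kill "$perfpid" && wait "$perfpid"         # stop bdevperf; it dumps its stats table on exit
sleep 1
kill -0 "$nvmfpid"                         # the target must still be alive, that is the point of the shutdown test
nvmftestfini                               # then stop the target and unload nvme-rdma/nvme-fabrics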
nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:36.043 ************************************ 00:30:36.043 START TEST nvmf_shutdown_tc3 00:30:36.043 ************************************ 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.043 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # 
x722=() 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:36.044 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:36.044 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:36.044 Found net devices under 0000:18:00.0: mlx_0_0 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:36.044 Found net devices under 0000:18:00.1: mlx_0_1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:36.044 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:36.044 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:30:36.044 altname enp24s0f0np0 00:30:36.044 altname ens785f0np0 00:30:36.044 inet 192.168.100.8/24 scope global mlx_0_0 00:30:36.044 valid_lft forever preferred_lft forever 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:36.044 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:36.045 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:36.045 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:30:36.045 altname enp24s0f1np1 00:30:36.045 altname ens785f1np1 00:30:36.045 inet 192.168.100.9/24 scope global mlx_0_1 00:30:36.045 valid_lft forever preferred_lft forever 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local 
net_dev rxe_net_dev rxe_net_devs 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:36.045 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:36.305 192.168.100.9' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo 
'192.168.100.8 00:30:36.305 192.168.100.9' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:36.305 192.168.100.9' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=933258 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 933258 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 933258 ']' 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.305 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:36.305 [2024-05-15 02:56:39.456715] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:36.305 [2024-05-15 02:56:39.456787] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.305 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.305 [2024-05-15 02:56:39.559753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.565 [2024-05-15 02:56:39.611542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.565 [2024-05-15 02:56:39.611590] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.565 [2024-05-15 02:56:39.611605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.565 [2024-05-15 02:56:39.611618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.565 [2024-05-15 02:56:39.611628] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.565 [2024-05-15 02:56:39.611733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.565 [2024-05-15 02:56:39.611835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.565 [2024-05-15 02:56:39.611938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:36.565 [2024-05-15 02:56:39.611939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:36.565 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.565 [2024-05-15 02:56:39.814208] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a8c060/0x1a90550) succeed. 00:30:36.565 [2024-05-15 02:56:39.829215] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a8d6a0/0x1ad1be0) succeed. 
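For anyone following the trace above: the RDMA target addresses are discovered by walking the mlx interfaces and stripping the prefix length from the "ip -o -4 addr show" output, after which the target app is started and the RDMA transport is created over its RPC socket. A minimal bash sketch of that pattern; the interface names, IPs and flags are the ones from this run and are not guaranteed elsewhere:

# Sketch of the get_ip_address / transport-setup pattern traced above (not the full harness).
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run

# Start the target and create the transport with the same flags as in the trace;
# the real harness waits for the RPC socket (waitforlisten) before issuing RPCs.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192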
00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:36.825 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:36.825 Malloc1 00:30:36.825 [2024-05-15 02:56:40.096347] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:30:36.825 [2024-05-15 02:56:40.096808] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:37.085 Malloc2 00:30:37.085 Malloc3 00:30:37.085 Malloc4 00:30:37.085 Malloc5 00:30:37.085 Malloc6 00:30:37.085 Malloc7 00:30:37.345 Malloc8 00:30:37.345 Malloc9 00:30:37.345 Malloc10 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=933485 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 933485 /var/tmp/bdevperf.sock 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 933485 ']' 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:37.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
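The gen_nvmf_target_json trace that follows assembles bdevperf's configuration: one bdev_nvme_attach_controller entry per subsystem, built from a heredoc, comma-joined, validated with jq, and handed to bdevperf as --json /dev/fd/63 via process substitution. A minimal sketch of that shape, not the exact helper; gen_bdevperf_json is a made-up name and the traddr is this run's first target IP:

# Illustrative only: builds a bdevperf --json payload in the same shape as the
# heredoc-per-subsystem pattern traced below.
gen_bdevperf_json() {
    local i entries=()
    for i in "$@"; do
        entries+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$i", "trtype": "rdma", "traddr": "192.168.100.8",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$i",
              "hostnqn": "nqn.2016-06.io.spdk:host$i",
              "hdgst": false, "ddgst": false } }
EOF
        )")
    done
    # Comma-join in a subshell so IFS only changes there; jq . validates the result.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=','; printf '%s' "${entries[*]}") ] } ] }
JSON
}
# Usage (flags as traced below):
#   ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
#       --json <(gen_bdevperf_json 1 2 3) -q 64 -o 65536 -w verify -t 10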
00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 
"trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.345 [2024-05-15 02:56:40.622425] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:37.345 [2024-05-15 02:56:40.622498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid933485 ] 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.345 { 00:30:37.345 "params": { 00:30:37.345 "name": "Nvme$subsystem", 00:30:37.345 "trtype": "$TEST_TRANSPORT", 00:30:37.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.345 "adrfam": "ipv4", 00:30:37.345 "trsvcid": "$NVMF_PORT", 00:30:37.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.345 "hdgst": ${hdgst:-false}, 00:30:37.345 "ddgst": ${ddgst:-false} 00:30:37.345 }, 00:30:37.345 "method": "bdev_nvme_attach_controller" 00:30:37.345 } 00:30:37.345 EOF 00:30:37.345 )") 00:30:37.345 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.604 { 00:30:37.604 "params": { 00:30:37.604 "name": "Nvme$subsystem", 00:30:37.604 "trtype": "$TEST_TRANSPORT", 00:30:37.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.604 "adrfam": "ipv4", 00:30:37.604 "trsvcid": "$NVMF_PORT", 00:30:37.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.604 "hdgst": ${hdgst:-false}, 00:30:37.604 "ddgst": ${ddgst:-false} 00:30:37.604 }, 00:30:37.604 "method": "bdev_nvme_attach_controller" 00:30:37.604 } 00:30:37.604 EOF 00:30:37.604 )") 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.604 { 00:30:37.604 "params": { 00:30:37.604 "name": "Nvme$subsystem", 00:30:37.604 "trtype": "$TEST_TRANSPORT", 00:30:37.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.604 "adrfam": "ipv4", 00:30:37.604 "trsvcid": "$NVMF_PORT", 00:30:37.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.604 "hdgst": ${hdgst:-false}, 00:30:37.604 "ddgst": ${ddgst:-false} 00:30:37.604 }, 00:30:37.604 "method": "bdev_nvme_attach_controller" 00:30:37.604 } 00:30:37.604 EOF 00:30:37.604 )") 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:37.604 { 00:30:37.604 "params": { 00:30:37.604 "name": "Nvme$subsystem", 00:30:37.604 "trtype": "$TEST_TRANSPORT", 00:30:37.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:37.604 "adrfam": "ipv4", 00:30:37.604 "trsvcid": "$NVMF_PORT", 00:30:37.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:37.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:37.604 "hdgst": 
${hdgst:-false}, 00:30:37.604 "ddgst": ${ddgst:-false} 00:30:37.604 }, 00:30:37.604 "method": "bdev_nvme_attach_controller" 00:30:37.604 } 00:30:37.604 EOF 00:30:37.604 )") 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:37.604 02:56:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:37.604 "params": { 00:30:37.604 "name": "Nvme1", 00:30:37.604 "trtype": "rdma", 00:30:37.604 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme2", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme3", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme4", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme5", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme6", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme7", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 
00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme8", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme9", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 },{ 00:30:37.605 "params": { 00:30:37.605 "name": "Nvme10", 00:30:37.605 "trtype": "rdma", 00:30:37.605 "traddr": "192.168.100.8", 00:30:37.605 "adrfam": "ipv4", 00:30:37.605 "trsvcid": "4420", 00:30:37.605 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:37.605 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:37.605 "hdgst": false, 00:30:37.605 "ddgst": false 00:30:37.605 }, 00:30:37.605 "method": "bdev_nvme_attach_controller" 00:30:37.605 }' 00:30:37.605 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.605 [2024-05-15 02:56:40.731768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.605 [2024-05-15 02:56:40.779117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.538 Running I/O for 10 seconds... 00:30:38.538 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:38.538 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:30:38.538 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:38.538 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:38.538 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:38.797 02:56:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:38.797 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.056 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:39.056 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:39.056 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:39.056 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:39.056 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 933258 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 933258 ']' 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 933258 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:39.316 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 933258 00:30:39.575 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:30:39.575 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:30:39.575 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 933258' 00:30:39.575 killing process with pid 933258 00:30:39.575 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 933258 00:30:39.575 [2024-05-15 02:56:42.633592] app.c:1024:log_deprecation_hits: 
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:39.575 02:56:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 933258 00:30:39.575 [2024-05-15 02:56:42.769847] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:40.142 02:56:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:40.142 02:56:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:40.400 [2024-05-15 02:56:43.649658] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:30:40.400 [2024-05-15 02:56:43.651721] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:30:40.400 [2024-05-15 02:56:43.654007] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:30:40.400 [2024-05-15 02:56:43.654284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.654331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.654369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.654401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.654435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.654466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.654499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.654531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.656525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.400 [2024-05-15 02:56:43.656568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:40.400 [2024-05-15 02:56:43.656599] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
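The read-I/O poll traced a little earlier (the waitforio loop in shutdown.sh) is what gates this shutdown: the harness queries bdevperf's per-bdev counters over /var/tmp/bdevperf.sock until Nvme1n1 has completed at least 100 reads (3 on the first pass, 131 on the second here), and only then kills the target, which produces the qpair disconnect and abort output below. A rough, illustrative equivalent of that poll, assuming the same rpc.py and socket path:

# Sketch of the polling loop, not the harness function; waitforio_sketch is a made-up name.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 i=10 ops
    while (( i != 0 )); do
        ops=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
        (( ops >= 100 )) && return 0    # enough traffic observed; safe to kill the target
        sleep 0.25
        (( i-- ))
    done
    return 1
}
# waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1 && kill "$nvmfpid"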
00:30:40.400 [2024-05-15 02:56:43.656644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.656677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:0 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.400 [2024-05-15 02:56:43.656710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.656741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:0 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.400 [2024-05-15 02:56:43.656774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.656806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:0 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.400 [2024-05-15 02:56:43.656838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.656870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:0 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.400 [2024-05-15 02:56:43.659494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.400 [2024-05-15 02:56:43.659536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:40.400 [2024-05-15 02:56:43.659566] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.400 [2024-05-15 02:56:43.659685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.659719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.659752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.659784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.659831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.659864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.659908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.400 [2024-05-15 02:56:43.659941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.400 [2024-05-15 02:56:43.662450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.400 [2024-05-15 02:56:43.662490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:30:40.400 [2024-05-15 02:56:43.662744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:40.400 [2024-05-15 02:56:43.662790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:40.400 [2024-05-15 02:56:43.662826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:40.400 [2024-05-15 02:56:43.666920] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.400 [2024-05-15 02:56:43.667127] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.400 [2024-05-15 02:56:43.667162] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.400 [2024-05-15 02:56:43.667187] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:30:40.400 [2024-05-15 02:56:43.667404] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.400 [2024-05-15 02:56:43.667440] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.400 [2024-05-15 02:56:43.667466] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:30:40.400 [2024-05-15 02:56:43.676567] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.400 [2024-05-15 02:56:43.686568] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.696599] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.706653] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.716685] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.726741] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.736769] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.746814] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.756861] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.766937] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.674 [2024-05-15 02:56:43.776947] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.674 [2024-05-15 02:56:43.778661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.778984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.778998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f873000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f852000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001212f000 len:0x10000 key:0x182800 00:30:40.674 [2024-05-15 02:56:43.779350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.674 [2024-05-15 02:56:43.779370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001210e000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ed000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012069000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012048000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012027000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012006000 len:0x10000 key:0x182800 
00:30:40.675 [2024-05-15 02:56:43.779664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fe5000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fc4000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.779973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124cb000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.779987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012489000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012468000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012447000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012426000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012405000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e4000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fddd000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdbc000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9b000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd7a000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 
02:56:43.780643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcd5000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcb4000 len:0x10000 key:0x182800 00:30:40.675 [2024-05-15 02:56:43.780692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.675 [2024-05-15 02:56:43.780712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc93000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc72000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc51000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc30000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x182800 00:30:40.676 [2024-05-15 02:56:43.780939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:b580 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.784293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:40.676 [2024-05-15 02:56:43.786471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.676 [2024-05-15 02:56:43.786617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 
len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.786969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.786983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.787014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.787045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.787077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x183500 00:30:40.676 [2024-05-15 02:56:43.787108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x183000 
00:30:40.676 [2024-05-15 02:56:43.787234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x183000 00:30:40.676 
[2024-05-15 02:56:43.787516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x183000 00:30:40.676 [2024-05-15 02:56:43.787547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x183f00 00:30:40.676 [2024-05-15 02:56:43.787577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.676 [2024-05-15 02:56:43.787594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 
02:56:43.787790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.787970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.787984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x183f00 00:30:40.677 [2024-05-15 02:56:43.788505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.788535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.788596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.788626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.788642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x183500 00:30:40.677 [2024-05-15 02:56:43.788656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32741 cdw0:6f6d2e sqhd:4b85 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.790478] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:30:40.677 [2024-05-15 02:56:43.790537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a9fb80 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.790570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.677 [2024-05-15 02:56:43.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a8fb00 len:0x10000 key:0x181e00 00:30:40.677 [2024-05-15 02:56:43.790645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a7fa80 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a5f980 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a4f900 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a3f880 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f800 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x181e00 00:30:40.678 [2024-05-15 02:56:43.790933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019df0000 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.790963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.790982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.790996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dbfe80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 
len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d3fa80 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d1f980 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182a00 00:30:40.678 
[2024-05-15 02:56:43.791428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019caf600 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182a00 00:30:40.678 [2024-05-15 02:56:43.791611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.678 [2024-05-15 02:56:43.791628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c7f480 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4f300 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c1f180 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182a00 00:30:40.679 [2024-05-15 02:56:43.791884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.791920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.791951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.791983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.791999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f9fd80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fd00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f4fb00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f1f980 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182b00 00:30:40.679 [2024-05-15 02:56:43.792349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f9000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f11a000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 
[2024-05-15 02:56:43.792549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.792581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x182800 00:30:40.679 [2024-05-15 02:56:43.792595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:40bf440 sqhd:5a90 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795049] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:30:40.679 [2024-05-15 02:56:43.795106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x182c00 00:30:40.679 [2024-05-15 02:56:43.795139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0cfd00 len:0x10000 key:0x182c00 00:30:40.679 [2024-05-15 02:56:43.795214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0bfc80 len:0x10000 key:0x182c00 00:30:40.679 [2024-05-15 02:56:43.795283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x182c00 00:30:40.679 [2024-05-15 02:56:43.795351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x182c00 00:30:40.679 [2024-05-15 02:56:43.795419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.679 [2024-05-15 02:56:43.795456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a08fb00 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a07fa80 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a06fa00 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f880 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.795963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.795996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x182c00 00:30:40.680 [2024-05-15 02:56:43.796066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182b00 00:30:40.680 [2024-05-15 02:56:43.796688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.796825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 
sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.796907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.796946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.796977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 
[2024-05-15 02:56:43.797504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.797952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.797984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.798022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x182f00 00:30:40.680 [2024-05-15 02:56:43.798054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.680 [2024-05-15 02:56:43.798094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x182f00 00:30:40.681 [2024-05-15 02:56:43.798906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.798944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.798975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x182d00 00:30:40.681 [2024-05-15 02:56:43.799535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.799572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x182c00 00:30:40.681 [2024-05-15 02:56:43.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402ec00 sqhd:8370 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.801636] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:30:40.681 [2024-05-15 02:56:43.801690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.801724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.801766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.801804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.801841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.801873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.801945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.801978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 
[2024-05-15 02:56:43.802154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.681 [2024-05-15 02:56:43.802712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183900 00:30:40.681 [2024-05-15 02:56:43.802745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.802782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183900 00:30:40.682 [2024-05-15 02:56:43.802813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.802850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183900 00:30:40.682 [2024-05-15 02:56:43.802882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.802929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183900 00:30:40.682 [2024-05-15 02:56:43.802961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.802998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.803944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.803980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 
len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183800 00:30:40.682 [2024-05-15 02:56:43.804847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x182d00 00:30:40.682 [2024-05-15 02:56:43.804926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.804964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 02:56:43.805000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.805038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 02:56:43.805070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.805107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 02:56:43.805139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.805176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 02:56:43.805208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.805245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 02:56:43.805276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.682 [2024-05-15 02:56:43.805314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x182800 00:30:40.682 [2024-05-15 
02:56:43.805345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011409000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.805966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.805991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.806019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x182800 00:30:40.683 [2024-05-15 02:56:43.806044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e840 sqhd:ac50 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808596] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:30:40.683 [2024-05-15 02:56:43.808662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183800 00:30:40.683 [2024-05-15 02:56:43.808690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183800 00:30:40.683 [2024-05-15 02:56:43.808749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.808804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.808859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.808923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.808958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.808984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 
len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.683 [2024-05-15 02:56:43.809727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x182e00 00:30:40.683 [2024-05-15 02:56:43.809752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.809781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.809807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.809836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.809861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.809891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.809945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.809974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x182e00 00:30:40.684 
[2024-05-15 02:56:43.810000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x182e00 00:30:40.684 [2024-05-15 02:56:43.810495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.810946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183700 00:30:40.684 [2024-05-15 02:56:43.811274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183100 00:30:40.684 [2024-05-15 02:56:43.811329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121d4000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121f5000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012216000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012237000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.684 [2024-05-15 02:56:43.811800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012258000 len:0x10000 key:0x182800 00:30:40.684 [2024-05-15 02:56:43.811825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.811855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012279000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.811880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.811918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001229a000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.811943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.811972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122bb000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.811998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 
[2024-05-15 02:56:43.812027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122dc000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.812053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.812086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122fd000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.812112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.812142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001231e000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.812167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.812196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001233f000 len:0x10000 key:0x182800 00:30:40.685 [2024-05-15 02:56:43.812221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:402e540 sqhd:d530 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.814732] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:30:40.685 [2024-05-15 02:56:43.814788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.814820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.814863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.814907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.814946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.814977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.815931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.815963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 
sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.685 [2024-05-15 02:56:43.816838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183e00 00:30:40.685 [2024-05-15 02:56:43.816870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.816919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183e00 00:30:40.686 [2024-05-15 02:56:43.816956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 
[2024-05-15 02:56:43.817063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.817947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.817980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.818933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.818966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.819004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.819038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.819076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.819107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.819144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183c00 00:30:40.686 [2024-05-15 02:56:43.819177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.819214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183200 00:30:40.686 [2024-05-15 02:56:43.819246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.819283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183700 00:30:40.686 [2024-05-15 02:56:43.819315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:408f440 sqhd:fe10 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.822068] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
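The dump above is SPDK's bdev_nvme layer printing, for qpair 0x20001b806440, every I/O that was still outstanding when the queue pair was torn down: each nvme_io_qpair_print_command entry (a WRITE or READ with its sqid, cid, lba and SGL address) is paired with a spdk_nvme_print_completion entry carrying the generic status ABORTED - SQ DELETION (00/08). To get a per-opcode, per-queue summary out of a saved console log, a small post-processing sketch like the one below can be used; it is not part of the test scripts, and the log file name build.log is only an assumption.

    #!/usr/bin/env bash
    # abort_summary.sh - count aborted I/O entries in a saved SPDK/bdevperf console log.
    # Usage: ./abort_summary.sh build.log   (the log file name is an assumption)
    log="${1:-build.log}"

    # Group the printed submissions by opcode (READ/WRITE) and submission queue id.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: \(READ\|WRITE\) sqid:[0-9]\+' "$log" \
      | awk '{print $3, $4}' | sort | uniq -c | sort -rn

    # Total number of completions reported as aborted due to SQ deletion.
    grep -c 'ABORTED - SQ DELETION' "$log"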
00:30:40.686 [2024-05-15 02:56:43.822311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.686 [2024-05-15 02:56:43.822353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:7321 cdw0:7331a850 sqhd:3440 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.822388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.686 [2024-05-15 02:56:43.822419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:7321 cdw0:7331a850 sqhd:3440 p:0 m:0 dnr:0 00:30:40.686 [2024-05-15 02:56:43.822452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.686 [2024-05-15 02:56:43.822483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:7321 cdw0:7331a850 sqhd:3440 p:0 m:0 dnr:0 00:30:40.687 [2024-05-15 02:56:43.822516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.822548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:7321 cdw0:7331a850 sqhd:3440 p:0 m:0 dnr:0 00:30:40.687 [2024-05-15 02:56:43.824512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.824554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:40.687 [2024-05-15 02:56:43.824585] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.824630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.824664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.824696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.824727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.824759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.824791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.824823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.824854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.827180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.827222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:40.687 [2024-05-15 02:56:43.827250] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.827293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.827327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.827359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.827398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.827431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.827462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.827495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.827525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.829767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.829807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:40.687 [2024-05-15 02:56:43.829836] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.829881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.829925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.829959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.829989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.830022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.830053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.830086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.830117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.832237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.832277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:40.687 [2024-05-15 02:56:43.832316] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.832359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.832389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.832419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.832447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.832477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.832506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.832536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.832570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.834664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.834704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:40.687 [2024-05-15 02:56:43.834733] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.834789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.834820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.834850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.834878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.834956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.834986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.835015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.687 [2024-05-15 02:56:43.835044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:31877 cdw0:7331a850 sqhd:fc00 p:0 m:1 dnr:0 00:30:40.687 [2024-05-15 02:56:43.865314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:40.687 [2024-05-15 02:56:43.865365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:40.687 [2024-05-15 02:56:43.865397] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.687 [2024-05-15 02:56:43.882819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:40.687 [2024-05-15 02:56:43.882858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:40.687 [2024-05-15 02:56:43.882879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:40.687 [2024-05-15 02:56:43.882976] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.687 [2024-05-15 02:56:43.883004] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:40.687 [2024-05-15 02:56:43.883026] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:40.687 [2024-05-15 02:56:43.883195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:40.687 [2024-05-15 02:56:43.883217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:40.687 [2024-05-15 02:56:43.883236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:40.687 task offset: 16384 on job bdev=Nvme10n1 fails
00:30:40.687
00:30:40.687                                          Latency(us)
00:30:40.687 Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s    Average        min         max
00:30:40.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.687 Job: Nvme1n1 ended in about 2.21 seconds with error
00:30:40.687 Verification LBA range: start 0x0 length 0x400
00:30:40.687 Nvme1n1            :      2.21   94.02    5.88   28.93   0.00  516908.91   35560.40  1072282.94
00:30:40.687 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.687 Job: Nvme2n1 ended in about 2.21 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme2n1            :      2.21   90.76    5.67   28.90   0.00  525332.71   40803.28  1079577.38
00:30:40.688 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme3n1 ended in about 2.22 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme3n1            :      2.22  115.49    7.22   28.87   0.00  430767.15    6211.67  1079577.38
00:30:40.688 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme4n1 ended in about 2.22 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme4n1            :      2.22  115.37    7.21   28.84   0.00  426970.07   28949.82  1225466.21
00:30:40.688 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme5n1 ended in about 2.22 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme5n1            :      2.22  111.65    6.98   28.81   0.00  433760.28   49693.38  1203582.89
00:30:40.688 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme6n1 ended in about 2.22 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme6n1            :      2.22  115.15    7.20   28.79   0.00  418851.26   69297.20  1181699.56
00:30:40.688 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme7n1 ended in about 2.22 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme7n1            :      2.22  115.08    7.19   28.77   0.00  414535.50   16184.54  1159816.24
00:30:40.688 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme8n1 ended in about 2.23 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme8n1            :      2.23  115.00    7.19   28.75   0.00  410446.27   22795.13  1145227.35
00:30:40.688 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme9n1 ended in about 2.23 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme9n1            :      2.23  114.93    7.18   28.73   0.00  406141.55   67929.49  1130638.47
00:30:40.688 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:40.688 Job: Nvme10n1 ended in about 2.13 seconds with error
00:30:40.688 Verification LBA range: start 0x0 length 0x400
00:30:40.688 Nvme10n1           :      2.13   60.17    3.76   30.08   0.00  634466.84   72032.61  1137932.91
00:30:40.688
=================================================================================================================== 00:30:40.688 Total : 1047.62 65.48 289.48 0.00 452357.65 6211.67 1225466.21 00:30:40.688 [2024-05-15 02:56:43.913259] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:40.688 [2024-05-15 02:56:43.918773] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.918809] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.918823] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:30:40.688 [2024-05-15 02:56:43.918914] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.918932] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.918943] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:30:40.688 [2024-05-15 02:56:43.927191] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.927258] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.927295] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:30:40.688 [2024-05-15 02:56:43.927442] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.927479] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.927504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:30:40.688 [2024-05-15 02:56:43.927630] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.927664] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.927689] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:30:40.688 [2024-05-15 02:56:43.928503] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.928545] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.928570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:30:40.688 [2024-05-15 02:56:43.928683] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.928718] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.928743] 
nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:30:40.688 [2024-05-15 02:56:43.928864] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:30:40.688 [2024-05-15 02:56:43.928912] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:30:40.688 [2024-05-15 02:56:43.928938] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 933485 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:40.947 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:40.947 rmmod nvme_rdma 00:30:40.947 rmmod nvme_fabrics 00:30:41.207 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 933485 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:41.207 00:30:41.207 real 0m5.100s 00:30:41.207 user 0m16.977s 00:30:41.207 sys 0m1.462s 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:41.207 ************************************ 00:30:41.207 END TEST nvmf_shutdown_tc3 00:30:41.207 ************************************ 00:30:41.207 
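The tail of the tc3 run above shows the common teardown path: stoptarget removes the bdevperf state and config files, and nvmftestfini unloads nvme-rdma and nvme-fabrics, retrying modprobe -r under set +e so a still-busy module does not abort the cleanup. A standalone sketch of that unload-with-retry pattern is shown below; it is simplified and is not a copy of the actual test/nvmf/common.sh implementation.

    #!/usr/bin/env bash
    # Simplified sketch of the module-unload-with-retry pattern used by the teardown above;
    # not the actual test/nvmf/common.sh logic.
    set +e                        # modprobe -r fails while the module is still in use
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1                   # assumed delay: give in-flight qpairs time to drain
    done
    modprobe -v -r nvme-fabrics
    set -e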
02:56:44 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:41.207 00:30:41.207 real 0m23.925s 00:30:41.207 user 1m8.684s 00:30:41.207 sys 0m9.169s 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:41.207 02:56:44 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.207 ************************************ 00:30:41.207 END TEST nvmf_shutdown 00:30:41.207 ************************************ 00:30:41.207 02:56:44 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:41.207 02:56:44 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:41.207 02:56:44 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:30:41.207 02:56:44 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:41.207 02:56:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:41.207 ************************************ 00:30:41.207 START TEST nvmf_multicontroller 00:30:41.207 ************************************ 00:30:41.207 02:56:44 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:30:41.466 * Looking for test storage... 
00:30:41.466 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.466 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:30:41.467 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:30:41.467 00:30:41.467 real 0m0.138s 00:30:41.467 user 0m0.057s 00:30:41.467 sys 0m0.092s 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:41.467 02:56:44 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.467 ************************************ 00:30:41.467 END TEST nvmf_multicontroller 00:30:41.467 ************************************ 00:30:41.467 02:56:44 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:41.467 02:56:44 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:41.467 02:56:44 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:41.467 02:56:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:41.467 ************************************ 00:30:41.467 START TEST nvmf_aer 00:30:41.467 ************************************ 00:30:41.467 02:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:30:41.726 * Looking for test storage... 00:30:41.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.726 02:56:44 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:41.727 02:56:44 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:41.727 02:56:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:48.296 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:48.296 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:48.297 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:48.297 Found net devices under 0000:18:00.0: mlx_0_0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:48.297 Found net devices under 0000:18:00.1: mlx_0_1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:48.297 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:48.297 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:30:48.297 altname enp24s0f0np0 00:30:48.297 altname ens785f0np0 00:30:48.297 inet 192.168.100.8/24 scope global mlx_0_0 00:30:48.297 valid_lft forever preferred_lft forever 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:48.297 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:48.297 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:30:48.297 altname enp24s0f1np1 00:30:48.297 altname ens785f1np1 00:30:48.297 inet 192.168.100.9/24 scope global mlx_0_1 00:30:48.297 valid_lft forever preferred_lft forever 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:48.297 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:48.298 02:56:50 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:48.298 192.168.100.9' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:48.298 192.168.100.9' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:48.298 192.168.100.9' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=936896 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 936896 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 936896 ']' 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.298 [2024-05-15 02:56:51.123820] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:48.298 [2024-05-15 02:56:51.123892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.298 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.298 [2024-05-15 02:56:51.231180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:48.298 [2024-05-15 02:56:51.283216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.298 [2024-05-15 02:56:51.283268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.298 [2024-05-15 02:56:51.283283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.298 [2024-05-15 02:56:51.283301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.298 [2024-05-15 02:56:51.283313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:48.298 [2024-05-15 02:56:51.283373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.298 [2024-05-15 02:56:51.283462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.298 [2024-05-15 02:56:51.283570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.298 [2024-05-15 02:56:51.283570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.298 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.298 [2024-05-15 02:56:51.488704] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e9d70/0x18ee260) succeed. 00:30:48.298 [2024-05-15 02:56:51.503823] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18eb3b0/0x192f8f0) succeed. 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.559 Malloc0 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.559 [2024-05-15 02:56:51.707122] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:48.559 [2024-05-15 02:56:51.707558] 
rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.559 [ 00:30:48.559 { 00:30:48.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:48.559 "subtype": "Discovery", 00:30:48.559 "listen_addresses": [], 00:30:48.559 "allow_any_host": true, 00:30:48.559 "hosts": [] 00:30:48.559 }, 00:30:48.559 { 00:30:48.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.559 "subtype": "NVMe", 00:30:48.559 "listen_addresses": [ 00:30:48.559 { 00:30:48.559 "trtype": "RDMA", 00:30:48.559 "adrfam": "IPv4", 00:30:48.559 "traddr": "192.168.100.8", 00:30:48.559 "trsvcid": "4420" 00:30:48.559 } 00:30:48.559 ], 00:30:48.559 "allow_any_host": true, 00:30:48.559 "hosts": [], 00:30:48.559 "serial_number": "SPDK00000000000001", 00:30:48.559 "model_number": "SPDK bdev Controller", 00:30:48.559 "max_namespaces": 2, 00:30:48.559 "min_cntlid": 1, 00:30:48.559 "max_cntlid": 65519, 00:30:48.559 "namespaces": [ 00:30:48.559 { 00:30:48.559 "nsid": 1, 00:30:48.559 "bdev_name": "Malloc0", 00:30:48.559 "name": "Malloc0", 00:30:48.559 "nguid": "702C2952D5034EABA769AB5D4E098525", 00:30:48.559 "uuid": "702c2952-d503-4eab-a769-ab5d4e098525" 00:30:48.559 } 00:30:48.559 ] 00:30:48.559 } 00:30:48.559 ] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=937054 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:30:48.559 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:30:48.559 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:30:48.818 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:48.818 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:30:48.818 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:30:48.818 02:56:51 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:48.818 Malloc1 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.818 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.078 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 [ 00:30:49.079 { 00:30:49.079 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:49.079 "subtype": "Discovery", 00:30:49.079 "listen_addresses": [], 00:30:49.079 "allow_any_host": true, 00:30:49.079 "hosts": [] 00:30:49.079 }, 00:30:49.079 { 00:30:49.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:49.079 "subtype": "NVMe", 00:30:49.079 "listen_addresses": [ 00:30:49.079 { 00:30:49.079 "trtype": "RDMA", 00:30:49.079 "adrfam": "IPv4", 00:30:49.079 "traddr": "192.168.100.8", 00:30:49.079 "trsvcid": "4420" 00:30:49.079 } 00:30:49.079 ], 00:30:49.079 "allow_any_host": true, 00:30:49.079 "hosts": [], 00:30:49.079 "serial_number": "SPDK00000000000001", 00:30:49.079 "model_number": "SPDK bdev Controller", 00:30:49.079 "max_namespaces": 2, 00:30:49.079 "min_cntlid": 1, 00:30:49.079 "max_cntlid": 65519, 00:30:49.079 "namespaces": [ 00:30:49.079 { 00:30:49.079 "nsid": 1, 00:30:49.079 "bdev_name": "Malloc0", 00:30:49.079 "name": "Malloc0", 00:30:49.079 "nguid": "702C2952D5034EABA769AB5D4E098525", 00:30:49.079 "uuid": "702c2952-d503-4eab-a769-ab5d4e098525" 00:30:49.079 }, 00:30:49.079 { 00:30:49.079 "nsid": 2, 00:30:49.079 "bdev_name": "Malloc1", 00:30:49.079 "name": "Malloc1", 00:30:49.079 "nguid": "C98F49F9A76442E38CB202E14B7F4785", 00:30:49.079 "uuid": "c98f49f9-a764-42e3-8cb2-02e14b7f4785" 00:30:49.079 } 00:30:49.079 ] 00:30:49.079 } 00:30:49.079 ] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 937054 00:30:49.079 Asynchronous Event Request test 00:30:49.079 Attaching to 192.168.100.8 00:30:49.079 Attached to 192.168.100.8 00:30:49.079 Registering asynchronous event callbacks... 00:30:49.079 Starting namespace attribute notice tests for all controllers... 
00:30:49.079 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:49.079 aer_cb - Changed Namespace 00:30:49.079 Cleaning up... 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:49.079 rmmod nvme_rdma 00:30:49.079 rmmod nvme_fabrics 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 936896 ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 936896 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 936896 ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 936896 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 936896 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 936896' 00:30:49.079 killing process with pid 936896 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # kill 936896 00:30:49.079 [2024-05-15 02:56:52.340308] app.c:1024:log_deprecation_hits: 
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:49.079 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@971 -- # wait 936896 00:30:49.338 [2024-05-15 02:56:52.439579] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:49.599 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:49.599 00:30:49.599 real 0m7.950s 00:30:49.599 user 0m6.833s 00:30:49.599 sys 0m5.432s 00:30:49.599 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:49.599 02:56:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 ************************************ 00:30:49.599 END TEST nvmf_aer 00:30:49.599 ************************************ 00:30:49.599 02:56:52 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:49.599 02:56:52 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:49.599 02:56:52 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:49.599 02:56:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 ************************************ 00:30:49.599 START TEST nvmf_async_init 00:30:49.599 ************************************ 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:30:49.599 * Looking for test storage... 00:30:49.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f9f5062e72a84859880bf40284011034 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:49.599 02:56:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.171 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.171 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:56.171 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:56.171 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:56.171 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 
-- # x722=() 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:30:56.172 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:30:56.172 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:56.172 02:56:58 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:30:56.172 Found net devices under 0000:18:00.0: mlx_0_0 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:30:56.172 Found net devices under 0000:18:00.1: mlx_0_1 00:30:56.172 02:56:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:56.172 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:56.172 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:30:56.172 altname enp24s0f0np0 00:30:56.172 altname ens785f0np0 00:30:56.172 inet 192.168.100.8/24 scope global mlx_0_0 00:30:56.172 valid_lft forever preferred_lft forever 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:56.172 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:56.172 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:30:56.172 altname enp24s0f1np1 00:30:56.172 altname ens785f1np1 00:30:56.172 inet 192.168.100.9/24 scope global mlx_0_1 00:30:56.172 valid_lft forever preferred_lft forever 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:56.172 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:56.173 
02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:56.173 192.168.100.9' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:56.173 192.168.100.9' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:56.173 192.168.100.9' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=939956 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 939956 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 939956 ']' 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:56.173 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.173 [2024-05-15 02:56:59.294165] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:56.173 [2024-05-15 02:56:59.294228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.173 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.173 [2024-05-15 02:56:59.391481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.173 [2024-05-15 02:56:59.442139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.173 [2024-05-15 02:56:59.442188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.173 [2024-05-15 02:56:59.442203] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.173 [2024-05-15 02:56:59.442216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.173 [2024-05-15 02:56:59.442227] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.173 [2024-05-15 02:56:59.442261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.433 [2024-05-15 02:56:59.616049] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b67c00/0x1b6c0f0) succeed. 00:30:56.433 [2024-05-15 02:56:59.629942] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b69100/0x1bad780) succeed. 
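The trace above covers nvmftestinit for the async_init run: it loads the RDMA kernel modules, finds the two mlx5 ports (mlx_0_0 at 192.168.100.8 and mlx_0_1 at 192.168.100.9), starts nvmf_tgt pinned to core 0, and creates the RDMA transport. A minimal sketch of the equivalent manual bring-up follows, reusing the exact arguments from the trace; it assumes rpc_cmd in the log is the suite's wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, and that the paths are relative to an SPDK checkout.

  # RDMA modules loaded by load_ib_rdma_modules and common.sh@474 in the trace
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do modprobe "$m"; done
  # target bring-up with the same shm id, trace mask and core mask as the log
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # the suite waits for /var/tmp/spdk.sock (waitforlisten) before issuing RPCs
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024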
00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.433 null0 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f9f5062e72a84859880bf40284011034 00:30:56.433 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.692 [2024-05-15 02:56:59.733071] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:56.692 [2024-05-15 02:56:59.733433] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.692 nvme0n1 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.692 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.692 [ 00:30:56.692 { 00:30:56.692 "name": "nvme0n1", 00:30:56.692 
"aliases": [ 00:30:56.692 "f9f5062e-72a8-4859-880b-f40284011034" 00:30:56.692 ], 00:30:56.692 "product_name": "NVMe disk", 00:30:56.692 "block_size": 512, 00:30:56.692 "num_blocks": 2097152, 00:30:56.692 "uuid": "f9f5062e-72a8-4859-880b-f40284011034", 00:30:56.692 "assigned_rate_limits": { 00:30:56.692 "rw_ios_per_sec": 0, 00:30:56.692 "rw_mbytes_per_sec": 0, 00:30:56.692 "r_mbytes_per_sec": 0, 00:30:56.692 "w_mbytes_per_sec": 0 00:30:56.692 }, 00:30:56.692 "claimed": false, 00:30:56.692 "zoned": false, 00:30:56.692 "supported_io_types": { 00:30:56.692 "read": true, 00:30:56.692 "write": true, 00:30:56.692 "unmap": false, 00:30:56.692 "write_zeroes": true, 00:30:56.692 "flush": true, 00:30:56.692 "reset": true, 00:30:56.692 "compare": true, 00:30:56.692 "compare_and_write": true, 00:30:56.692 "abort": true, 00:30:56.692 "nvme_admin": true, 00:30:56.692 "nvme_io": true 00:30:56.692 }, 00:30:56.692 "memory_domains": [ 00:30:56.692 { 00:30:56.692 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:56.692 "dma_device_type": 0 00:30:56.692 } 00:30:56.692 ], 00:30:56.692 "driver_specific": { 00:30:56.692 "nvme": [ 00:30:56.692 { 00:30:56.692 "trid": { 00:30:56.692 "trtype": "RDMA", 00:30:56.692 "adrfam": "IPv4", 00:30:56.692 "traddr": "192.168.100.8", 00:30:56.692 "trsvcid": "4420", 00:30:56.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:56.692 }, 00:30:56.692 "ctrlr_data": { 00:30:56.693 "cntlid": 1, 00:30:56.693 "vendor_id": "0x8086", 00:30:56.693 "model_number": "SPDK bdev Controller", 00:30:56.693 "serial_number": "00000000000000000000", 00:30:56.693 "firmware_revision": "24.05", 00:30:56.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.693 "oacs": { 00:30:56.693 "security": 0, 00:30:56.693 "format": 0, 00:30:56.693 "firmware": 0, 00:30:56.693 "ns_manage": 0 00:30:56.693 }, 00:30:56.693 "multi_ctrlr": true, 00:30:56.693 "ana_reporting": false 00:30:56.693 }, 00:30:56.693 "vs": { 00:30:56.693 "nvme_version": "1.3" 00:30:56.693 }, 00:30:56.693 "ns_data": { 00:30:56.693 "id": 1, 00:30:56.693 "can_share": true 00:30:56.693 } 00:30:56.693 } 00:30:56.693 ], 00:30:56.693 "mp_policy": "active_passive" 00:30:56.693 } 00:30:56.693 } 00:30:56.693 ] 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.693 [2024-05-15 02:56:59.860777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.693 [2024-05-15 02:56:59.883891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:56.693 [2024-05-15 02:56:59.916281] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.693 [ 00:30:56.693 { 00:30:56.693 "name": "nvme0n1", 00:30:56.693 "aliases": [ 00:30:56.693 "f9f5062e-72a8-4859-880b-f40284011034" 00:30:56.693 ], 00:30:56.693 "product_name": "NVMe disk", 00:30:56.693 "block_size": 512, 00:30:56.693 "num_blocks": 2097152, 00:30:56.693 "uuid": "f9f5062e-72a8-4859-880b-f40284011034", 00:30:56.693 "assigned_rate_limits": { 00:30:56.693 "rw_ios_per_sec": 0, 00:30:56.693 "rw_mbytes_per_sec": 0, 00:30:56.693 "r_mbytes_per_sec": 0, 00:30:56.693 "w_mbytes_per_sec": 0 00:30:56.693 }, 00:30:56.693 "claimed": false, 00:30:56.693 "zoned": false, 00:30:56.693 "supported_io_types": { 00:30:56.693 "read": true, 00:30:56.693 "write": true, 00:30:56.693 "unmap": false, 00:30:56.693 "write_zeroes": true, 00:30:56.693 "flush": true, 00:30:56.693 "reset": true, 00:30:56.693 "compare": true, 00:30:56.693 "compare_and_write": true, 00:30:56.693 "abort": true, 00:30:56.693 "nvme_admin": true, 00:30:56.693 "nvme_io": true 00:30:56.693 }, 00:30:56.693 "memory_domains": [ 00:30:56.693 { 00:30:56.693 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:56.693 "dma_device_type": 0 00:30:56.693 } 00:30:56.693 ], 00:30:56.693 "driver_specific": { 00:30:56.693 "nvme": [ 00:30:56.693 { 00:30:56.693 "trid": { 00:30:56.693 "trtype": "RDMA", 00:30:56.693 "adrfam": "IPv4", 00:30:56.693 "traddr": "192.168.100.8", 00:30:56.693 "trsvcid": "4420", 00:30:56.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:56.693 }, 00:30:56.693 "ctrlr_data": { 00:30:56.693 "cntlid": 2, 00:30:56.693 "vendor_id": "0x8086", 00:30:56.693 "model_number": "SPDK bdev Controller", 00:30:56.693 "serial_number": "00000000000000000000", 00:30:56.693 "firmware_revision": "24.05", 00:30:56.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.693 "oacs": { 00:30:56.693 "security": 0, 00:30:56.693 "format": 0, 00:30:56.693 "firmware": 0, 00:30:56.693 "ns_manage": 0 00:30:56.693 }, 00:30:56.693 "multi_ctrlr": true, 00:30:56.693 "ana_reporting": false 00:30:56.693 }, 00:30:56.693 "vs": { 00:30:56.693 "nvme_version": "1.3" 00:30:56.693 }, 00:30:56.693 "ns_data": { 00:30:56.693 "id": 1, 00:30:56.693 "can_share": true 00:30:56.693 } 00:30:56.693 } 00:30:56.693 ], 00:30:56.693 "mp_policy": "active_passive" 00:30:56.693 } 00:30:56.693 } 00:30:56.693 ] 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.693 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.f3olmrxPac 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:56.951 02:56:59 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.f3olmrxPac 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:56:59 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 [2024-05-15 02:57:00.010295] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f3olmrxPac 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.f3olmrxPac 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 [2024-05-15 02:57:00.030318] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:56.951 nvme0n1 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 [ 00:30:56.951 { 00:30:56.951 "name": "nvme0n1", 00:30:56.951 "aliases": [ 00:30:56.951 "f9f5062e-72a8-4859-880b-f40284011034" 00:30:56.951 ], 00:30:56.951 "product_name": "NVMe disk", 00:30:56.951 "block_size": 512, 00:30:56.951 "num_blocks": 2097152, 00:30:56.951 "uuid": "f9f5062e-72a8-4859-880b-f40284011034", 00:30:56.951 "assigned_rate_limits": { 00:30:56.951 "rw_ios_per_sec": 0, 00:30:56.951 "rw_mbytes_per_sec": 0, 00:30:56.951 "r_mbytes_per_sec": 0, 00:30:56.951 "w_mbytes_per_sec": 0 00:30:56.951 }, 00:30:56.951 "claimed": false, 00:30:56.951 "zoned": false, 00:30:56.951 "supported_io_types": { 00:30:56.951 "read": true, 00:30:56.951 "write": true, 00:30:56.951 "unmap": false, 00:30:56.951 "write_zeroes": true, 00:30:56.951 "flush": true, 00:30:56.951 "reset": true, 00:30:56.951 "compare": true, 00:30:56.951 "compare_and_write": true, 00:30:56.951 "abort": true, 
00:30:56.951 "nvme_admin": true, 00:30:56.951 "nvme_io": true 00:30:56.951 }, 00:30:56.951 "memory_domains": [ 00:30:56.951 { 00:30:56.951 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:30:56.951 "dma_device_type": 0 00:30:56.951 } 00:30:56.951 ], 00:30:56.951 "driver_specific": { 00:30:56.951 "nvme": [ 00:30:56.951 { 00:30:56.951 "trid": { 00:30:56.951 "trtype": "RDMA", 00:30:56.951 "adrfam": "IPv4", 00:30:56.951 "traddr": "192.168.100.8", 00:30:56.951 "trsvcid": "4421", 00:30:56.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:56.951 }, 00:30:56.951 "ctrlr_data": { 00:30:56.951 "cntlid": 3, 00:30:56.951 "vendor_id": "0x8086", 00:30:56.951 "model_number": "SPDK bdev Controller", 00:30:56.951 "serial_number": "00000000000000000000", 00:30:56.951 "firmware_revision": "24.05", 00:30:56.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.951 "oacs": { 00:30:56.951 "security": 0, 00:30:56.951 "format": 0, 00:30:56.951 "firmware": 0, 00:30:56.951 "ns_manage": 0 00:30:56.951 }, 00:30:56.951 "multi_ctrlr": true, 00:30:56.951 "ana_reporting": false 00:30:56.951 }, 00:30:56.951 "vs": { 00:30:56.951 "nvme_version": "1.3" 00:30:56.951 }, 00:30:56.951 "ns_data": { 00:30:56.951 "id": 1, 00:30:56.951 "can_share": true 00:30:56.951 } 00:30:56.951 } 00:30:56.951 ], 00:30:56.951 "mp_policy": "active_passive" 00:30:56.951 } 00:30:56.951 } 00:30:56.951 ] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.f3olmrxPac 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:56.951 rmmod nvme_rdma 00:30:56.951 rmmod nvme_fabrics 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 939956 ']' 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 939956 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 939956 ']' 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 939956 00:30:56.951 02:57:00 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:56.951 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 939956 00:30:57.210 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:57.210 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:57.210 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 939956' 00:30:57.210 killing process with pid 939956 00:30:57.210 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 939956 00:30:57.210 [2024-05-15 02:57:00.266394] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:57.210 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 939956 00:30:57.210 [2024-05-15 02:57:00.319696] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:30:57.469 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:57.469 02:57:00 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:57.469 00:30:57.469 real 0m7.793s 00:30:57.469 user 0m3.081s 00:30:57.469 sys 0m5.346s 00:30:57.469 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:57.469 02:57:00 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:57.469 ************************************ 00:30:57.469 END TEST nvmf_async_init 00:30:57.469 ************************************ 00:30:57.469 02:57:00 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:57.469 02:57:00 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:57.469 02:57:00 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:57.469 02:57:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:57.469 ************************************ 00:30:57.469 START TEST dma 00:30:57.469 ************************************ 00:30:57.469 02:57:00 nvmf_rdma.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:30:57.469 * Looking for test storage... 
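Before the dma suite output continues below, the secure-channel steps the async_init test just exercised on the same subsystem are worth summarizing: write a TLS PSK to a 0600 temp file, disable allow-any-host, add a second listener on port 4421 with --secure-channel, whitelist nqn.2016-06.io.spdk:host1 with the PSK, and re-attach the controller through it (the log notes TLS support is experimental, and the re-attached controller shows up as cntlid 3). A hedged sketch with the arguments copied from the trace; the key string is the test's example key, not a real secret, and rpc.py as the client is again an assumption:

  KEY=$(mktemp)                      # the trace got /tmp/tmp.f3olmrxPac
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  rm -f "$KEY"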
00:30:57.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:57.469 02:57:00 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.469 02:57:00 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.469 02:57:00 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.469 02:57:00 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.469 02:57:00 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.469 02:57:00 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.469 02:57:00 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:30:57.469 02:57:00 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:30:57.469 02:57:00 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.469 02:57:00 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.469 02:57:00 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:57.469 02:57:00 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:30:57.469 02:57:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.153 02:57:06 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.153 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:04.154 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:04.154 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:04.154 Found net devices under 0000:18:00.0: mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:04.154 Found net devices under 0000:18:00.1: mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:04.154 02:57:06 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:04.154 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:04.154 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:31:04.154 altname enp24s0f0np0 00:31:04.154 altname ens785f0np0 00:31:04.154 inet 192.168.100.8/24 scope global mlx_0_0 00:31:04.154 valid_lft forever preferred_lft forever 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:04.154 02:57:06 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:04.154 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:04.154 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:31:04.154 altname enp24s0f1np1 00:31:04.154 altname ens785f1np1 00:31:04.154 inet 192.168.100.9/24 scope global mlx_0_1 00:31:04.154 valid_lft forever preferred_lft forever 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:04.154 02:57:06 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:04.154 192.168.100.9' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:04.154 192.168.100.9' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:04.154 192.168.100.9' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:04.154 02:57:07 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=943549 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:04.154 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 943549 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@828 -- # '[' -z 943549 ']' 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.154 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 [2024-05-15 02:57:07.121248] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:04.155 [2024-05-15 02:57:07.121311] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.155 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.155 [2024-05-15 02:57:07.218518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:04.155 [2024-05-15 02:57:07.270697] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
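The trace above shows how the test derives its target addresses: get_ip_address reads the first IPv4 address off each RDMA netdev, and the resulting RDMA_IP_LIST is then split with head/tail into the first and second target IPs. A minimal bash sketch reconstructed from the traced commands (a simplification, not the verbatim nvmf/common.sh source) looks roughly like this:

  get_ip_address() {
      local interface=$1
      # first IPv4 address on the interface, with the /prefix stripped
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # mlx_0_0 / mlx_0_1 are the two mlx5 netdevs discovered earlier in the trace;
  # the real script walks them via get_rdma_if_list rather than a hard-coded list
  RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9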
00:31:04.155 [2024-05-15 02:57:07.270750] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.155 [2024-05-15 02:57:07.270765] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.155 [2024-05-15 02:57:07.270778] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.155 [2024-05-15 02:57:07.270788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.155 [2024-05-15 02:57:07.270907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.155 [2024-05-15 02:57:07.270908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@861 -- # return 0 00:31:04.155 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.155 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.155 02:57:07 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:04.155 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.414 [2024-05-15 02:57:07.443244] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b27a0/0x11b6c90) succeed. 00:31:04.414 [2024-05-15 02:57:07.456775] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b3ca0/0x11f8320) succeed. 
00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.414 Malloc0 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:04.414 [2024-05-15 02:57:07.637319] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:04.414 [2024-05-15 02:57:07.637706] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:04.414 02:57:07 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:31:04.414 02:57:07 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:31:04.414 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:04.415 { 00:31:04.415 "params": { 00:31:04.415 "name": "Nvme$subsystem", 00:31:04.415 "trtype": "$TEST_TRANSPORT", 00:31:04.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.415 "adrfam": "ipv4", 00:31:04.415 "trsvcid": "$NVMF_PORT", 00:31:04.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.415 "hdgst": ${hdgst:-false}, 00:31:04.415 "ddgst": ${ddgst:-false} 00:31:04.415 }, 00:31:04.415 "method": "bdev_nvme_attach_controller" 00:31:04.415 } 00:31:04.415 EOF 00:31:04.415 )") 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:31:04.415 02:57:07 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:04.415 "params": { 00:31:04.415 "name": "Nvme0", 00:31:04.415 "trtype": "rdma", 00:31:04.415 "traddr": "192.168.100.8", 00:31:04.415 "adrfam": "ipv4", 00:31:04.415 "trsvcid": "4420", 00:31:04.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.415 "hdgst": false, 00:31:04.415 "ddgst": false 00:31:04.415 }, 00:31:04.415 "method": "bdev_nvme_attach_controller" 00:31:04.415 }' 00:31:04.415 [2024-05-15 02:57:07.690148] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:04.415 [2024-05-15 02:57:07.690215] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943576 ] 00:31:04.674 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.674 [2024-05-15 02:57:07.775028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:04.674 [2024-05-15 02:57:07.820627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.674 [2024-05-15 02:57:07.820628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.951 bdev Nvme0n1 reports 1 memory domains 00:31:09.951 bdev Nvme0n1 supports RDMA memory domain 00:31:09.951 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:09.951 ========================================================================== 00:31:09.951 Latency [us] 00:31:09.951 IOPS MiB/s Average min max 00:31:09.951 Core 2: 17006.62 66.43 940.14 398.40 8852.88 00:31:09.951 Core 3: 16114.58 62.95 991.85 454.43 8880.20 00:31:09.951 ========================================================================== 00:31:09.951 Total : 33121.20 129.38 965.30 398.40 8880.20 00:31:09.951 00:31:09.951 Total operations: 165635, translate 165635 pull_push 0 memzero 0 00:31:09.951 02:57:13 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:31:09.951 02:57:13 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:31:09.951 02:57:13 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:31:10.211 [2024-05-15 02:57:13.271940] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
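Putting the trace above together: after nvmfappstart and nvmf_create_transport, the DMA test builds its target with four RPCs, renders the bdev_nvme_attach_controller JSON via gen_nvmf_target_json, and feeds it to test_dma on /dev/fd/62. A condensed sketch of that sequence, assuming rpc_cmd forwards each call to scripts/rpc.py against the running nvmf_tgt, is:

  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
  rpc_cmd bdev_malloc_create 256 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # gen_nvmf_target_json 0 emits the Nvme0 attach-controller config printed above;
  # test_dma consumes it on /dev/fd/62 and drives Nvme0n1 through the translate path
  test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json /dev/fd/62 -b Nvme0n1 -f -x translate

Per the summary line above, all 165635 operations completed through translate, consistent with the earlier report that bdev Nvme0n1 supports an RDMA memory domain.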
00:31:10.211 [2024-05-15 02:57:13.272015] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944302 ] 00:31:10.211 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.211 [2024-05-15 02:57:13.356316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:10.211 [2024-05-15 02:57:13.402094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.211 [2024-05-15 02:57:13.402093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.488 bdev Malloc0 reports 2 memory domains 00:31:15.488 bdev Malloc0 doesn't support RDMA memory domain 00:31:15.488 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:15.488 ========================================================================== 00:31:15.488 Latency [us] 00:31:15.488 IOPS MiB/s Average min max 00:31:15.488 Core 2: 14383.57 56.19 1111.41 370.49 1367.59 00:31:15.488 Core 3: 10339.83 40.39 1546.08 400.18 2479.78 00:31:15.488 ========================================================================== 00:31:15.488 Total : 24723.40 96.58 1293.20 370.49 2479.78 00:31:15.488 00:31:15.488 Total operations: 123686, translate 0 pull_push 494744 memzero 0 00:31:15.488 02:57:18 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:31:15.488 02:57:18 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:31:15.488 02:57:18 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:31:15.488 02:57:18 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:31:15.746 Ignoring -M option 00:31:15.746 [2024-05-15 02:57:18.791282] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
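In each result table the Total row's average latency is consistent with an IOPS-weighted mean of the per-core averages; that relationship is inferred from the numbers here, not stated by the tool. A quick check with awk against the Malloc0 pull_push table above:

  awk 'BEGIN {
      iops2 = 14383.57; avg2 = 1111.41   # Core 2 row
      iops3 = 10339.83; avg3 = 1546.08   # Core 3 row
      printf "weighted average: %.2f us\n", (iops2*avg2 + iops3*avg3) / (iops2 + iops3)
  }'
  # prints ~1293.20 us, matching the Total row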
00:31:15.746 [2024-05-15 02:57:18.791354] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945023 ] 00:31:15.746 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.746 [2024-05-15 02:57:18.876276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.746 [2024-05-15 02:57:18.922168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.746 [2024-05-15 02:57:18.922171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.315 bdev c0978edd-0302-4bc3-970e-fddc2b1a659c reports 1 memory domains 00:31:22.315 bdev c0978edd-0302-4bc3-970e-fddc2b1a659c supports RDMA memory domain 00:31:22.315 Initialization complete, running randread IO for 5 sec on 2 cores 00:31:22.315 ========================================================================== 00:31:22.315 Latency [us] 00:31:22.315 IOPS MiB/s Average min max 00:31:22.315 Core 2: 78544.66 306.82 202.92 68.34 1804.67 00:31:22.315 Core 3: 60144.56 234.94 264.94 70.61 1948.01 00:31:22.315 ========================================================================== 00:31:22.315 Total : 138689.22 541.75 229.82 68.34 1948.01 00:31:22.315 00:31:22.315 Total operations: 693525, translate 0 pull_push 0 memzero 693525 00:31:22.315 02:57:24 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:31:22.315 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.315 [2024-05-15 02:57:24.505331] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:23.694 Initializing NVMe Controllers 00:31:23.694 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:31:23.694 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:23.694 Initialization complete. Launching workers. 00:31:23.694 ======================================================== 00:31:23.694 Latency(us) 00:31:23.694 Device Information : IOPS MiB/s Average min max 00:31:23.694 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2040.49 7.97 7902.24 3277.38 10972.33 00:31:23.694 ======================================================== 00:31:23.694 Total : 2040.49 7.97 7902.24 3277.38 10972.33 00:31:23.694 00:31:23.694 02:57:26 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:31:23.694 02:57:26 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:31:23.694 02:57:26 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:31:23.694 02:57:26 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:31:23.694 [2024-05-15 02:57:26.886884] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
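Between the memzero run and the final translate run, the script also drives the same RDMA listener with the standalone spdk_nvme_perf tool (results above). For readability the options used are annotated below; the annotations reflect the standard meanings of these perf flags rather than anything the trace itself spells out:

  # -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
  # -r transport ID string of the remote NVMe-oF target to attach to
  build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'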
00:31:23.695 [2024-05-15 02:57:26.886960] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946025 ] 00:31:23.695 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.695 [2024-05-15 02:57:26.971484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.954 [2024-05-15 02:57:27.018605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.954 [2024-05-15 02:57:27.018607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.262 bdev ce09b959-5986-4436-aeec-576db64f1726 reports 1 memory domains 00:31:29.262 bdev ce09b959-5986-4436-aeec-576db64f1726 supports RDMA memory domain 00:31:29.262 Initialization complete, running randrw IO for 5 sec on 2 cores 00:31:29.262 ========================================================================== 00:31:29.262 Latency [us] 00:31:29.262 IOPS MiB/s Average min max 00:31:29.262 Core 2: 15533.86 60.68 1029.02 14.67 12252.19 00:31:29.262 Core 3: 13456.22 52.56 1187.81 16.53 12655.92 00:31:29.262 ========================================================================== 00:31:29.262 Total : 28990.08 113.24 1102.72 14.67 12655.92 00:31:29.262 00:31:29.262 Total operations: 145004, translate 144905 pull_push 0 memzero 99 00:31:29.262 02:57:32 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:29.262 02:57:32 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:29.262 rmmod nvme_rdma 00:31:29.262 rmmod nvme_fabrics 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 943549 ']' 00:31:29.262 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 943549 00:31:29.262 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@947 -- # '[' -z 943549 ']' 00:31:29.262 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # kill -0 943549 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # uname 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 943549 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@965 -- # echo 'killing process with pid 943549' 00:31:29.522 killing process with pid 943549 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # kill 943549 00:31:29.522 [2024-05-15 02:57:32.605072] app.c:1024:log_deprecation_hits: 
*WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:29.522 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@971 -- # wait 943549 00:31:29.522 [2024-05-15 02:57:32.674988] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:31:29.782 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:29.782 02:57:32 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:29.782 00:31:29.782 real 0m32.383s 00:31:29.782 user 1m35.663s 00:31:29.782 sys 0m6.355s 00:31:29.782 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:29.782 02:57:32 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:31:29.782 ************************************ 00:31:29.782 END TEST dma 00:31:29.782 ************************************ 00:31:29.782 02:57:33 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:29.782 02:57:33 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:29.782 02:57:33 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:29.782 02:57:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:30.041 ************************************ 00:31:30.041 START TEST nvmf_identify 00:31:30.041 ************************************ 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:31:30.041 * Looking for test storage... 00:31:30.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.041 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:31:30.042 02:57:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:36.625 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.626 02:57:39 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:36.626 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:36.626 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:36.626 Found net devices under 0000:18:00.0: mlx_0_0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:36.626 Found net devices under 0000:18:00.1: mlx_0_1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg 
rxe-net 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:36.626 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:36.626 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:31:36.626 altname enp24s0f0np0 00:31:36.626 altname ens785f0np0 00:31:36.626 inet 192.168.100.8/24 scope global mlx_0_0 00:31:36.626 valid_lft forever preferred_lft forever 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:36.626 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:36.626 
link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:31:36.626 altname enp24s0f1np1 00:31:36.626 altname ens785f1np1 00:31:36.626 inet 192.168.100.9/24 scope global mlx_0_1 00:31:36.626 valid_lft forever preferred_lft forever 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:36.626 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:36.627 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:36.946 192.168.100.9' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:36.946 192.168.100.9' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:36.946 192.168.100.9' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=949725 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 949725 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 949725 ']' 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:36.946 02:57:39 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:36.946 [2024-05-15 02:57:40.039490] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
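The identify test then starts its own nvmf_tgt instance with a wider core mask. The flags on that command line are standard SPDK application options; the annotations below are added for readability and are not part of the trace:

  # -i 0      shared-memory ID (matches the 'spdk_trace -s nvmf -i 0' hint below)
  # -e 0xFFFF tracepoint group mask ('Tracepoint Group Mask 0xFFFF specified')
  # -m 0xF    reactor core mask, i.e. cores 0-3 ('Total cores available: 4' below)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF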
00:31:36.946 [2024-05-15 02:57:40.039577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.946 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.946 [2024-05-15 02:57:40.152175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.946 [2024-05-15 02:57:40.207385] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.946 [2024-05-15 02:57:40.207438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.946 [2024-05-15 02:57:40.207452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.946 [2024-05-15 02:57:40.207465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.946 [2024-05-15 02:57:40.207476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.946 [2024-05-15 02:57:40.207554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.946 [2024-05-15 02:57:40.207641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.946 [2024-05-15 02:57:40.207745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.946 [2024-05-15 02:57:40.207746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.883 02:57:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:37.883 02:57:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:31:37.883 02:57:40 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:37.883 02:57:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.883 02:57:40 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:37.883 [2024-05-15 02:57:40.954083] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa2d70/0x1aa7260) succeed. 00:31:37.883 [2024-05-15 02:57:40.969012] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa43b0/0x1ae88f0) succeed. 
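Unlike the DMA test, the identify test creates its RDMA transport with an additional -u 8192 argument. Treat the following as assumptions rather than trace facts: -u in the nvmf_create_transport RPC is taken here to set the transport I/O unit size, and rpc_cmd is taken to forward to scripts/rpc.py on the default /var/tmp/spdk.sock. Under those assumptions, the traced call corresponds roughly to:

  # assumption: -u selects the I/O unit size (8 KiB) for the RDMA transport
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192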
00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.883 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 Malloc0 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 [2024-05-15 02:57:41.206659] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:38.145 [2024-05-15 02:57:41.207083] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.145 [ 00:31:38.145 { 00:31:38.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:38.145 "subtype": "Discovery", 00:31:38.145 "listen_addresses": [ 00:31:38.145 { 00:31:38.145 "trtype": "RDMA", 00:31:38.145 "adrfam": "IPv4", 00:31:38.145 
"traddr": "192.168.100.8", 00:31:38.145 "trsvcid": "4420" 00:31:38.145 } 00:31:38.145 ], 00:31:38.145 "allow_any_host": true, 00:31:38.145 "hosts": [] 00:31:38.145 }, 00:31:38.145 { 00:31:38.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.145 "subtype": "NVMe", 00:31:38.145 "listen_addresses": [ 00:31:38.145 { 00:31:38.145 "trtype": "RDMA", 00:31:38.145 "adrfam": "IPv4", 00:31:38.145 "traddr": "192.168.100.8", 00:31:38.145 "trsvcid": "4420" 00:31:38.145 } 00:31:38.145 ], 00:31:38.145 "allow_any_host": true, 00:31:38.145 "hosts": [], 00:31:38.145 "serial_number": "SPDK00000000000001", 00:31:38.145 "model_number": "SPDK bdev Controller", 00:31:38.145 "max_namespaces": 32, 00:31:38.145 "min_cntlid": 1, 00:31:38.145 "max_cntlid": 65519, 00:31:38.145 "namespaces": [ 00:31:38.145 { 00:31:38.145 "nsid": 1, 00:31:38.145 "bdev_name": "Malloc0", 00:31:38.145 "name": "Malloc0", 00:31:38.145 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:38.145 "eui64": "ABCDEF0123456789", 00:31:38.145 "uuid": "2bd41e32-ee77-40d5-a8ab-9012f6fc51e1" 00:31:38.145 } 00:31:38.145 ] 00:31:38.145 } 00:31:38.145 ] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.145 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:38.145 [2024-05-15 02:57:41.264018] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:38.145 [2024-05-15 02:57:41.264060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949888 ] 00:31:38.145 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.145 [2024-05-15 02:57:41.328147] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:38.145 [2024-05-15 02:57:41.328258] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:38.145 [2024-05-15 02:57:41.328288] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:38.145 [2024-05-15 02:57:41.328297] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:38.145 [2024-05-15 02:57:41.328335] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:38.145 [2024-05-15 02:57:41.346437] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:31:38.145 [2024-05-15 02:57:41.368241] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:38.145 [2024-05-15 02:57:41.368255] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:38.145 [2024-05-15 02:57:41.368265] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368277] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368287] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368297] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368306] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368316] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368325] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368335] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368345] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368354] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368364] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368373] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368383] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368393] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368402] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368412] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368421] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368431] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368440] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368450] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368460] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368469] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 02:57:41.368483] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:31:38.145 [2024-05-15 
02:57:41.368492] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368502] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368512] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368521] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368531] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368540] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368550] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368560] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368569] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:38.146 [2024-05-15 02:57:41.368577] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:38.146 [2024-05-15 02:57:41.368584] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:38.146 [2024-05-15 02:57:41.368610] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.368629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183800 00:31:38.146 [2024-05-15 02:57:41.374904] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.374918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.374929] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.374942] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:38.146 [2024-05-15 02:57:41.374952] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:38.146 [2024-05-15 02:57:41.374962] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:38.146 [2024-05-15 02:57:41.374980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.374992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375024] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375044] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:38.146 [2024-05-15 02:57:41.375053] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375063] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:38.146 [2024-05-15 02:57:41.375075] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375109] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375132] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:38.146 [2024-05-15 02:57:41.375141] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375152] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375163] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375197] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375225] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375238] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375277] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375296] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:38.146 [2024-05-15 02:57:41.375305] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375314] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375324] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375434] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:38.146 [2024-05-15 02:57:41.375443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375456] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375493] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375511] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:38.146 [2024-05-15 02:57:41.375521] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375533] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375573] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375591] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:38.146 [2024-05-15 02:57:41.375600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:38.146 [2024-05-15 02:57:41.375609] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375620] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:38.146 [2024-05-15 02:57:41.375633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:38.146 [2024-05-15 02:57:41.375647] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:31:38.146 [2024-05-15 02:57:41.375700] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375721] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:38.146 [2024-05-15 02:57:41.375731] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:38.146 [2024-05-15 02:57:41.375740] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:38.146 [2024-05-15 02:57:41.375750] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:38.146 [2024-05-15 02:57:41.375759] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:38.146 [2024-05-15 02:57:41.375768] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:38.146 [2024-05-15 02:57:41.375778] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375789] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:38.146 [2024-05-15 02:57:41.375804] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.146 [2024-05-15 02:57:41.375842] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.146 [2024-05-15 02:57:41.375850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:38.146 [2024-05-15 02:57:41.375863] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183800 00:31:38.146 [2024-05-15 02:57:41.375874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.146 [2024-05-15 02:57:41.375885] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.375910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.147 [2024-05-15 02:57:41.375922] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.375932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.147 [2024-05-15 02:57:41.375943] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.375954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.147 [2024-05-15 02:57:41.375963] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:31:38.147 [2024-05-15 02:57:41.375972] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.375988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:38.147 [2024-05-15 02:57:41.375999] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.147 [2024-05-15 02:57:41.376031] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376050] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:38.147 [2024-05-15 02:57:41.376060] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:38.147 [2024-05-15 02:57:41.376069] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376083] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:31:38.147 [2024-05-15 02:57:41.376122] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376142] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376156] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:38.147 [2024-05-15 02:57:41.376192] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183800 00:31:38.147 [2024-05-15 02:57:41.376216] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.147 [2024-05-15 02:57:41.376243] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376269] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b00 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183800 00:31:38.147 [2024-05-15 02:57:41.376290] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376300] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376317] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376327] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376350] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183800 00:31:38.147 [2024-05-15 02:57:41.376370] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:31:38.147 [2024-05-15 02:57:41.376394] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.147 [2024-05-15 02:57:41.376403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:38.147 [2024-05-15 02:57:41.376418] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:31:38.147 ===================================================== 00:31:38.147 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:38.147 ===================================================== 00:31:38.147 Controller Capabilities/Features 00:31:38.147 ================================ 00:31:38.147 Vendor ID: 0000 00:31:38.147 Subsystem Vendor ID: 0000 00:31:38.147 Serial Number: .................... 00:31:38.147 Model Number: ........................................ 
00:31:38.147 Firmware Version: 24.05 00:31:38.147 Recommended Arb Burst: 0 00:31:38.147 IEEE OUI Identifier: 00 00 00 00:31:38.147 Multi-path I/O 00:31:38.147 May have multiple subsystem ports: No 00:31:38.147 May have multiple controllers: No 00:31:38.147 Associated with SR-IOV VF: No 00:31:38.147 Max Data Transfer Size: 131072 00:31:38.147 Max Number of Namespaces: 0 00:31:38.147 Max Number of I/O Queues: 1024 00:31:38.147 NVMe Specification Version (VS): 1.3 00:31:38.147 NVMe Specification Version (Identify): 1.3 00:31:38.147 Maximum Queue Entries: 128 00:31:38.147 Contiguous Queues Required: Yes 00:31:38.147 Arbitration Mechanisms Supported 00:31:38.147 Weighted Round Robin: Not Supported 00:31:38.147 Vendor Specific: Not Supported 00:31:38.147 Reset Timeout: 15000 ms 00:31:38.147 Doorbell Stride: 4 bytes 00:31:38.147 NVM Subsystem Reset: Not Supported 00:31:38.147 Command Sets Supported 00:31:38.147 NVM Command Set: Supported 00:31:38.147 Boot Partition: Not Supported 00:31:38.147 Memory Page Size Minimum: 4096 bytes 00:31:38.147 Memory Page Size Maximum: 4096 bytes 00:31:38.147 Persistent Memory Region: Not Supported 00:31:38.147 Optional Asynchronous Events Supported 00:31:38.147 Namespace Attribute Notices: Not Supported 00:31:38.147 Firmware Activation Notices: Not Supported 00:31:38.147 ANA Change Notices: Not Supported 00:31:38.147 PLE Aggregate Log Change Notices: Not Supported 00:31:38.147 LBA Status Info Alert Notices: Not Supported 00:31:38.147 EGE Aggregate Log Change Notices: Not Supported 00:31:38.147 Normal NVM Subsystem Shutdown event: Not Supported 00:31:38.147 Zone Descriptor Change Notices: Not Supported 00:31:38.147 Discovery Log Change Notices: Supported 00:31:38.147 Controller Attributes 00:31:38.147 128-bit Host Identifier: Not Supported 00:31:38.147 Non-Operational Permissive Mode: Not Supported 00:31:38.147 NVM Sets: Not Supported 00:31:38.147 Read Recovery Levels: Not Supported 00:31:38.147 Endurance Groups: Not Supported 00:31:38.147 Predictable Latency Mode: Not Supported 00:31:38.147 Traffic Based Keep ALive: Not Supported 00:31:38.147 Namespace Granularity: Not Supported 00:31:38.147 SQ Associations: Not Supported 00:31:38.147 UUID List: Not Supported 00:31:38.147 Multi-Domain Subsystem: Not Supported 00:31:38.147 Fixed Capacity Management: Not Supported 00:31:38.147 Variable Capacity Management: Not Supported 00:31:38.147 Delete Endurance Group: Not Supported 00:31:38.147 Delete NVM Set: Not Supported 00:31:38.147 Extended LBA Formats Supported: Not Supported 00:31:38.147 Flexible Data Placement Supported: Not Supported 00:31:38.147 00:31:38.147 Controller Memory Buffer Support 00:31:38.147 ================================ 00:31:38.147 Supported: No 00:31:38.147 00:31:38.147 Persistent Memory Region Support 00:31:38.147 ================================ 00:31:38.147 Supported: No 00:31:38.147 00:31:38.147 Admin Command Set Attributes 00:31:38.147 ============================ 00:31:38.147 Security Send/Receive: Not Supported 00:31:38.147 Format NVM: Not Supported 00:31:38.147 Firmware Activate/Download: Not Supported 00:31:38.147 Namespace Management: Not Supported 00:31:38.147 Device Self-Test: Not Supported 00:31:38.147 Directives: Not Supported 00:31:38.147 NVMe-MI: Not Supported 00:31:38.147 Virtualization Management: Not Supported 00:31:38.147 Doorbell Buffer Config: Not Supported 00:31:38.147 Get LBA Status Capability: Not Supported 00:31:38.147 Command & Feature Lockdown Capability: Not Supported 00:31:38.147 Abort Command Limit: 1 00:31:38.147 Async 
Event Request Limit: 4 00:31:38.147 Number of Firmware Slots: N/A 00:31:38.147 Firmware Slot 1 Read-Only: N/A 00:31:38.147 Firmware Activation Without Reset: N/A 00:31:38.147 Multiple Update Detection Support: N/A 00:31:38.147 Firmware Update Granularity: No Information Provided 00:31:38.147 Per-Namespace SMART Log: No 00:31:38.147 Asymmetric Namespace Access Log Page: Not Supported 00:31:38.147 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:38.147 Command Effects Log Page: Not Supported 00:31:38.147 Get Log Page Extended Data: Supported 00:31:38.148 Telemetry Log Pages: Not Supported 00:31:38.148 Persistent Event Log Pages: Not Supported 00:31:38.148 Supported Log Pages Log Page: May Support 00:31:38.148 Commands Supported & Effects Log Page: Not Supported 00:31:38.148 Feature Identifiers & Effects Log Page:May Support 00:31:38.148 NVMe-MI Commands & Effects Log Page: May Support 00:31:38.148 Data Area 4 for Telemetry Log: Not Supported 00:31:38.148 Error Log Page Entries Supported: 128 00:31:38.148 Keep Alive: Not Supported 00:31:38.148 00:31:38.148 NVM Command Set Attributes 00:31:38.148 ========================== 00:31:38.148 Submission Queue Entry Size 00:31:38.148 Max: 1 00:31:38.148 Min: 1 00:31:38.148 Completion Queue Entry Size 00:31:38.148 Max: 1 00:31:38.148 Min: 1 00:31:38.148 Number of Namespaces: 0 00:31:38.148 Compare Command: Not Supported 00:31:38.148 Write Uncorrectable Command: Not Supported 00:31:38.148 Dataset Management Command: Not Supported 00:31:38.148 Write Zeroes Command: Not Supported 00:31:38.148 Set Features Save Field: Not Supported 00:31:38.148 Reservations: Not Supported 00:31:38.148 Timestamp: Not Supported 00:31:38.148 Copy: Not Supported 00:31:38.148 Volatile Write Cache: Not Present 00:31:38.148 Atomic Write Unit (Normal): 1 00:31:38.148 Atomic Write Unit (PFail): 1 00:31:38.148 Atomic Compare & Write Unit: 1 00:31:38.148 Fused Compare & Write: Supported 00:31:38.148 Scatter-Gather List 00:31:38.148 SGL Command Set: Supported 00:31:38.148 SGL Keyed: Supported 00:31:38.148 SGL Bit Bucket Descriptor: Not Supported 00:31:38.148 SGL Metadata Pointer: Not Supported 00:31:38.148 Oversized SGL: Not Supported 00:31:38.148 SGL Metadata Address: Not Supported 00:31:38.148 SGL Offset: Supported 00:31:38.148 Transport SGL Data Block: Not Supported 00:31:38.148 Replay Protected Memory Block: Not Supported 00:31:38.148 00:31:38.148 Firmware Slot Information 00:31:38.148 ========================= 00:31:38.148 Active slot: 0 00:31:38.148 00:31:38.148 00:31:38.148 Error Log 00:31:38.148 ========= 00:31:38.148 00:31:38.148 Active Namespaces 00:31:38.148 ================= 00:31:38.148 Discovery Log Page 00:31:38.148 ================== 00:31:38.148 Generation Counter: 2 00:31:38.148 Number of Records: 2 00:31:38.148 Record Format: 0 00:31:38.148 00:31:38.148 Discovery Log Entry 0 00:31:38.148 ---------------------- 00:31:38.148 Transport Type: 1 (RDMA) 00:31:38.148 Address Family: 1 (IPv4) 00:31:38.148 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:38.148 Entry Flags: 00:31:38.148 Duplicate Returned Information: 1 00:31:38.148 Explicit Persistent Connection Support for Discovery: 1 00:31:38.148 Transport Requirements: 00:31:38.148 Secure Channel: Not Required 00:31:38.148 Port ID: 0 (0x0000) 00:31:38.148 Controller ID: 65535 (0xffff) 00:31:38.148 Admin Max SQ Size: 128 00:31:38.148 Transport Service Identifier: 4420 00:31:38.148 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:38.148 Transport Address: 192.168.100.8 00:31:38.148 
Transport Specific Address Subtype - RDMA 00:31:38.148 RDMA QP Service Type: 1 (Reliable Connected) 00:31:38.148 RDMA Provider Type: 1 (No provider specified) 00:31:38.148 RDMA CM Service: 1 (RDMA_CM) 00:31:38.148 Discovery Log Entry 1 00:31:38.148 ---------------------- 00:31:38.148 Transport Type: 1 (RDMA) 00:31:38.148 Address Family: 1 (IPv4) 00:31:38.148 Subsystem Type: 2 (NVM Subsystem) 00:31:38.148 Entry Flags: 00:31:38.148 Duplicate Returned Information: 0 00:31:38.148 Explicit Persistent Connection Support for Discovery: 0 00:31:38.148 Transport Requirements: 00:31:38.148 Secure Channel: Not Required 00:31:38.148 Port ID: 0 (0x0000) 00:31:38.148 Controller ID: 65535 (0xffff) 00:31:38.148 Admin Max SQ Size: [2024-05-15 02:57:41.376523] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:38.148 [2024-05-15 02:57:41.376537] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41138 doesn't match qid 00:31:38.148 [2024-05-15 02:57:41.376557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32621 cdw0:5 sqhd:cef0 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376567] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41138 doesn't match qid 00:31:38.148 [2024-05-15 02:57:41.376579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32621 cdw0:5 sqhd:cef0 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376589] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41138 doesn't match qid 00:31:38.148 [2024-05-15 02:57:41.376602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32621 cdw0:5 sqhd:cef0 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376611] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 41138 doesn't match qid 00:31:38.148 [2024-05-15 02:57:41.376623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32621 cdw0:5 sqhd:cef0 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376636] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.376676] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.376685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376698] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.376721] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376747] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.376756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376766] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:38.148 [2024-05-15 02:57:41.376775] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:38.148 [2024-05-15 02:57:41.376785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376799] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.376840] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.376849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376858] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376872] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.376912] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.376921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376931] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376945] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.376956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.376978] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.376988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.376998] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377013] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.377053] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.377061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.377072] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377086] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 
00:31:38.148 [2024-05-15 02:57:41.377097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.377124] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.377134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.377144] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377158] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.148 [2024-05-15 02:57:41.377196] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.148 [2024-05-15 02:57:41.377205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:31:38.148 [2024-05-15 02:57:41.377215] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:31:38.148 [2024-05-15 02:57:41.377228] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377261] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377279] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377292] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377333] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377351] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377364] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377400] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377418] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377431] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377466] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377485] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377498] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377539] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377559] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377572] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377610] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377628] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377642] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377685] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377703] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377717] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377749] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377767] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377780] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377815] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377834] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377847] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377882] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377904] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377918] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.377963] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.377972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.377981] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.377995] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.378027] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.378035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.378045] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378058] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.378099] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.378107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.378117] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378130] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.378168] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.378177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.378186] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378200] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.378237] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.149 [2024-05-15 02:57:41.378246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:31:38.149 [2024-05-15 02:57:41.378256] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378269] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.149 [2024-05-15 02:57:41.378280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.149 [2024-05-15 02:57:41.378301] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378319] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378333] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378372] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 
02:57:41.378391] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378404] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378439] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378457] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378471] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378506] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378524] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378575] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378593] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378606] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378642] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378660] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378673] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378705] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378724] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378737] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378771] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378789] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378803] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.378843] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.378852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.378862] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378875] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.378886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.382907] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.382917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.382927] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.382941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.382953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.150 [2024-05-15 02:57:41.382979] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.150 [2024-05-15 02:57:41.382988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0011 p:0 m:0 dnr:0 00:31:38.150 [2024-05-15 02:57:41.382998] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.150 [2024-05-15 02:57:41.383008] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:31:38.413 128 00:31:38.413 Transport Service Identifier: 4420 00:31:38.413 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:38.413 Transport Address: 192.168.100.8 00:31:38.413 Transport Specific Address Subtype - RDMA 00:31:38.413 RDMA QP Service Type: 1 (Reliable Connected) 00:31:38.413 RDMA Provider Type: 1 (No provider specified) 00:31:38.413 RDMA CM Service: 1 (RDMA_CM) 00:31:38.413 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:38.413 [2024-05-15 02:57:41.473658] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:38.413 [2024-05-15 02:57:41.473704] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949935 ] 00:31:38.413 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.413 [2024-05-15 02:57:41.535639] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:38.413 [2024-05-15 02:57:41.535737] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:31:38.413 [2024-05-15 02:57:41.535761] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:31:38.413 [2024-05-15 02:57:41.535769] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:31:38.413 [2024-05-15 02:57:41.535800] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:38.413 [2024-05-15 02:57:41.552439] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:31:38.413 [2024-05-15 02:57:41.573217] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:38.413 [2024-05-15 02:57:41.573232] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:31:38.413 [2024-05-15 02:57:41.573242] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573253] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573263] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573273] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573282] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573292] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573301] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573311] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573321] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573330] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573340] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573349] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573359] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573369] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573378] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573388] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573397] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573407] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573416] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573426] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573436] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573445] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573455] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 
02:57:41.573464] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573477] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573487] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573497] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573506] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573516] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573525] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573535] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573544] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:31:38.413 [2024-05-15 02:57:41.573552] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:31:38.413 [2024-05-15 02:57:41.573559] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:31:38.413 [2024-05-15 02:57:41.573581] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.573599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183800 00:31:38.413 [2024-05-15 02:57:41.579903] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.413 [2024-05-15 02:57:41.579916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:31:38.413 [2024-05-15 02:57:41.579927] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.579939] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:38.413 [2024-05-15 02:57:41.579948] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:38.413 [2024-05-15 02:57:41.579958] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:38.413 [2024-05-15 02:57:41.579974] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.413 [2024-05-15 02:57:41.579987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580003] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580022] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:38.414 [2024-05-15 02:57:41.580032] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580042] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:38.414 [2024-05-15 02:57:41.580054] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580080] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:38.414 [2024-05-15 02:57:41.580111] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580122] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580133] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580163] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580191] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580204] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580233] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580251] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:38.414 [2024-05-15 02:57:41.580260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580270] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580390] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:38.414 [2024-05-15 02:57:41.580397] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580410] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580437] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580455] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:38.414 [2024-05-15 02:57:41.580464] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580477] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580510] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580530] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:38.414 [2024-05-15 02:57:41.580539] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580548] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580559] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:38.414 [2024-05-15 02:57:41.580577] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580591] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:31:38.414 [2024-05-15 02:57:41.580644] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580665] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:38.414 [2024-05-15 02:57:41.580675] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:38.414 [2024-05-15 02:57:41.580683] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:38.414 [2024-05-15 02:57:41.580691] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:38.414 [2024-05-15 02:57:41.580700] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:38.414 [2024-05-15 02:57:41.580709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580718] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580729] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580744] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580777] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580798] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.414 [2024-05-15 02:57:41.580819] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.414 [2024-05-15 02:57:41.580841] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.414 [2024-05-15 02:57:41.580864] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.414 [2024-05-15 02:57:41.580884] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580893] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580926] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.580937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.580953] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.580961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.580971] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:38.414 [2024-05-15 02:57:41.580981] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.580990] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.581003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.581014] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.581025] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.581036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.414 [2024-05-15 02:57:41.581054] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.414 [2024-05-15 02:57:41.581063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:31:38.414 [2024-05-15 02:57:41.581127] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.581137] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183800 00:31:38.414 [2024-05-15 02:57:41.581148] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:38.414 [2024-05-15 02:57:41.581161] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581199] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581226] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:38.415 
[2024-05-15 02:57:41.581243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581255] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581267] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581279] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581317] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581353] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581376] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581414] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581445] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581455] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581468] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581479] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581488] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581498] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:31:38.415 [2024-05-15 02:57:41.581507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:38.415 [2024-05-15 02:57:41.581516] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:38.415 [2024-05-15 02:57:41.581537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.415 [2024-05-15 02:57:41.581561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:38.415 [2024-05-15 02:57:41.581589] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581608] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581617] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581635] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581649] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.415 [2024-05-15 02:57:41.581677] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581695] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581709] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.415 [2024-05-15 02:57:41.581737] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581755] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581768] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 
lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.415 [2024-05-15 02:57:41.581796] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581814] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581830] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581853] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581904] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183800 00:31:38.415 [2024-05-15 02:57:41.581931] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581956] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581966] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.581974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.581988] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.581997] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.582006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.582019] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183800 00:31:38.415 [2024-05-15 02:57:41.582029] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.415 [2024-05-15 02:57:41.582037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:38.415 [2024-05-15 02:57:41.582051] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183800 00:31:38.415 ===================================================== 00:31:38.415 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.415 ===================================================== 00:31:38.415 Controller Capabilities/Features 00:31:38.415 ================================ 00:31:38.415 Vendor ID: 8086 00:31:38.415 Subsystem Vendor ID: 8086 00:31:38.415 Serial Number: SPDK00000000000001 00:31:38.415 Model Number: SPDK bdev Controller 00:31:38.415 Firmware Version: 24.05 00:31:38.415 Recommended Arb Burst: 6 00:31:38.415 IEEE OUI Identifier: e4 d2 5c 00:31:38.415 Multi-path I/O 00:31:38.415 May have multiple subsystem ports: Yes 00:31:38.415 May have multiple controllers: Yes 00:31:38.415 Associated with SR-IOV VF: No 00:31:38.415 Max Data Transfer Size: 131072 00:31:38.415 Max Number of Namespaces: 32 00:31:38.415 Max Number of I/O Queues: 127 00:31:38.415 NVMe Specification Version (VS): 1.3 00:31:38.415 NVMe Specification Version (Identify): 1.3 00:31:38.415 Maximum Queue Entries: 128 00:31:38.415 Contiguous Queues Required: Yes 00:31:38.415 Arbitration Mechanisms Supported 00:31:38.415 Weighted Round Robin: Not Supported 00:31:38.415 Vendor Specific: Not Supported 00:31:38.415 Reset Timeout: 15000 ms 00:31:38.415 Doorbell Stride: 4 bytes 00:31:38.415 NVM Subsystem Reset: Not Supported 00:31:38.415 Command Sets Supported 00:31:38.415 NVM Command Set: Supported 00:31:38.415 Boot Partition: Not Supported 00:31:38.415 Memory Page Size Minimum: 4096 bytes 00:31:38.415 Memory Page Size Maximum: 4096 bytes 00:31:38.416 Persistent Memory Region: Not Supported 00:31:38.416 Optional Asynchronous Events Supported 00:31:38.416 Namespace Attribute Notices: Supported 00:31:38.416 Firmware Activation Notices: Not Supported 00:31:38.416 ANA Change Notices: Not Supported 00:31:38.416 PLE Aggregate Log Change Notices: Not Supported 00:31:38.416 LBA Status Info Alert Notices: Not Supported 00:31:38.416 EGE Aggregate Log Change Notices: Not Supported 00:31:38.416 Normal NVM Subsystem Shutdown event: Not Supported 00:31:38.416 Zone Descriptor Change Notices: Not Supported 00:31:38.416 Discovery Log Change Notices: Not Supported 00:31:38.416 Controller Attributes 00:31:38.416 128-bit Host Identifier: Supported 00:31:38.416 Non-Operational Permissive Mode: Not Supported 00:31:38.416 NVM Sets: Not Supported 00:31:38.416 Read Recovery Levels: Not Supported 00:31:38.416 Endurance Groups: Not Supported 00:31:38.416 Predictable Latency Mode: Not Supported 00:31:38.416 Traffic Based Keep ALive: Not Supported 00:31:38.416 Namespace Granularity: Not Supported 00:31:38.416 SQ Associations: Not Supported 00:31:38.416 UUID List: Not Supported 00:31:38.416 Multi-Domain Subsystem: Not Supported 00:31:38.416 Fixed Capacity Management: Not Supported 00:31:38.416 Variable Capacity Management: Not Supported 00:31:38.416 Delete Endurance Group: Not Supported 00:31:38.416 Delete NVM Set: Not Supported 00:31:38.416 Extended LBA Formats Supported: Not Supported 00:31:38.416 Flexible Data Placement Supported: Not Supported 00:31:38.416 00:31:38.416 Controller Memory Buffer Support 00:31:38.416 
================================ 00:31:38.416 Supported: No 00:31:38.416 00:31:38.416 Persistent Memory Region Support 00:31:38.416 ================================ 00:31:38.416 Supported: No 00:31:38.416 00:31:38.416 Admin Command Set Attributes 00:31:38.416 ============================ 00:31:38.416 Security Send/Receive: Not Supported 00:31:38.416 Format NVM: Not Supported 00:31:38.416 Firmware Activate/Download: Not Supported 00:31:38.416 Namespace Management: Not Supported 00:31:38.416 Device Self-Test: Not Supported 00:31:38.416 Directives: Not Supported 00:31:38.416 NVMe-MI: Not Supported 00:31:38.416 Virtualization Management: Not Supported 00:31:38.416 Doorbell Buffer Config: Not Supported 00:31:38.416 Get LBA Status Capability: Not Supported 00:31:38.416 Command & Feature Lockdown Capability: Not Supported 00:31:38.416 Abort Command Limit: 4 00:31:38.416 Async Event Request Limit: 4 00:31:38.416 Number of Firmware Slots: N/A 00:31:38.416 Firmware Slot 1 Read-Only: N/A 00:31:38.416 Firmware Activation Without Reset: N/A 00:31:38.416 Multiple Update Detection Support: N/A 00:31:38.416 Firmware Update Granularity: No Information Provided 00:31:38.416 Per-Namespace SMART Log: No 00:31:38.416 Asymmetric Namespace Access Log Page: Not Supported 00:31:38.416 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:38.416 Command Effects Log Page: Supported 00:31:38.416 Get Log Page Extended Data: Supported 00:31:38.416 Telemetry Log Pages: Not Supported 00:31:38.416 Persistent Event Log Pages: Not Supported 00:31:38.416 Supported Log Pages Log Page: May Support 00:31:38.416 Commands Supported & Effects Log Page: Not Supported 00:31:38.416 Feature Identifiers & Effects Log Page:May Support 00:31:38.416 NVMe-MI Commands & Effects Log Page: May Support 00:31:38.416 Data Area 4 for Telemetry Log: Not Supported 00:31:38.416 Error Log Page Entries Supported: 128 00:31:38.416 Keep Alive: Supported 00:31:38.416 Keep Alive Granularity: 10000 ms 00:31:38.416 00:31:38.416 NVM Command Set Attributes 00:31:38.416 ========================== 00:31:38.416 Submission Queue Entry Size 00:31:38.416 Max: 64 00:31:38.416 Min: 64 00:31:38.416 Completion Queue Entry Size 00:31:38.416 Max: 16 00:31:38.416 Min: 16 00:31:38.416 Number of Namespaces: 32 00:31:38.416 Compare Command: Supported 00:31:38.416 Write Uncorrectable Command: Not Supported 00:31:38.416 Dataset Management Command: Supported 00:31:38.416 Write Zeroes Command: Supported 00:31:38.416 Set Features Save Field: Not Supported 00:31:38.416 Reservations: Supported 00:31:38.416 Timestamp: Not Supported 00:31:38.416 Copy: Supported 00:31:38.416 Volatile Write Cache: Present 00:31:38.416 Atomic Write Unit (Normal): 1 00:31:38.416 Atomic Write Unit (PFail): 1 00:31:38.416 Atomic Compare & Write Unit: 1 00:31:38.416 Fused Compare & Write: Supported 00:31:38.416 Scatter-Gather List 00:31:38.416 SGL Command Set: Supported 00:31:38.416 SGL Keyed: Supported 00:31:38.416 SGL Bit Bucket Descriptor: Not Supported 00:31:38.416 SGL Metadata Pointer: Not Supported 00:31:38.416 Oversized SGL: Not Supported 00:31:38.416 SGL Metadata Address: Not Supported 00:31:38.416 SGL Offset: Supported 00:31:38.416 Transport SGL Data Block: Not Supported 00:31:38.416 Replay Protected Memory Block: Not Supported 00:31:38.416 00:31:38.416 Firmware Slot Information 00:31:38.416 ========================= 00:31:38.416 Active slot: 1 00:31:38.416 Slot 1 Firmware Revision: 24.05 00:31:38.416 00:31:38.416 00:31:38.416 Commands Supported and Effects 00:31:38.416 ============================== 
00:31:38.416 Admin Commands 00:31:38.416 -------------- 00:31:38.416 Get Log Page (02h): Supported 00:31:38.416 Identify (06h): Supported 00:31:38.416 Abort (08h): Supported 00:31:38.416 Set Features (09h): Supported 00:31:38.416 Get Features (0Ah): Supported 00:31:38.416 Asynchronous Event Request (0Ch): Supported 00:31:38.416 Keep Alive (18h): Supported 00:31:38.416 I/O Commands 00:31:38.416 ------------ 00:31:38.416 Flush (00h): Supported LBA-Change 00:31:38.416 Write (01h): Supported LBA-Change 00:31:38.416 Read (02h): Supported 00:31:38.416 Compare (05h): Supported 00:31:38.416 Write Zeroes (08h): Supported LBA-Change 00:31:38.416 Dataset Management (09h): Supported LBA-Change 00:31:38.416 Copy (19h): Supported LBA-Change 00:31:38.416 Unknown (79h): Supported LBA-Change 00:31:38.416 Unknown (7Ah): Supported 00:31:38.416 00:31:38.416 Error Log 00:31:38.416 ========= 00:31:38.416 00:31:38.416 Arbitration 00:31:38.416 =========== 00:31:38.416 Arbitration Burst: 1 00:31:38.416 00:31:38.416 Power Management 00:31:38.416 ================ 00:31:38.416 Number of Power States: 1 00:31:38.416 Current Power State: Power State #0 00:31:38.416 Power State #0: 00:31:38.416 Max Power: 0.00 W 00:31:38.416 Non-Operational State: Operational 00:31:38.416 Entry Latency: Not Reported 00:31:38.416 Exit Latency: Not Reported 00:31:38.416 Relative Read Throughput: 0 00:31:38.416 Relative Read Latency: 0 00:31:38.416 Relative Write Throughput: 0 00:31:38.416 Relative Write Latency: 0 00:31:38.416 Idle Power: Not Reported 00:31:38.416 Active Power: Not Reported 00:31:38.416 Non-Operational Permissive Mode: Not Supported 00:31:38.416 00:31:38.416 Health Information 00:31:38.416 ================== 00:31:38.416 Critical Warnings: 00:31:38.416 Available Spare Space: OK 00:31:38.416 Temperature: OK 00:31:38.416 Device Reliability: OK 00:31:38.416 Read Only: No 00:31:38.416 Volatile Memory Backup: OK 00:31:38.416 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:38.416 Temperature Threshold: [2024-05-15 02:57:41.582167] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183800 00:31:38.416 [2024-05-15 02:57:41.582181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.416 [2024-05-15 02:57:41.582197] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.416 [2024-05-15 02:57:41.582206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 02:57:41.582216] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183800 00:31:38.416 [2024-05-15 02:57:41.582253] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:38.416 [2024-05-15 02:57:41.582266] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18395 doesn't match qid 00:31:38.416 [2024-05-15 02:57:41.582286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:0ef0 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 02:57:41.582296] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18395 doesn't match qid 00:31:38.416 [2024-05-15 02:57:41.582309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:0ef0 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 
02:57:41.582318] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18395 doesn't match qid 00:31:38.416 [2024-05-15 02:57:41.582331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:0ef0 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 02:57:41.582340] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 18395 doesn't match qid 00:31:38.416 [2024-05-15 02:57:41.582352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32681 cdw0:5 sqhd:0ef0 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 02:57:41.582365] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183800 00:31:38.416 [2024-05-15 02:57:41.582377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.416 [2024-05-15 02:57:41.582396] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.416 [2024-05-15 02:57:41.582405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:31:38.416 [2024-05-15 02:57:41.582417] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.417 [2024-05-15 02:57:41.582439] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582450] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.417 [2024-05-15 02:57:41.582458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:38.417 [2024-05-15 02:57:41.582468] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:38.417 [2024-05-15 02:57:41.582477] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:38.417 [2024-05-15 02:57:41.582486] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582499] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.417 [2024-05-15 02:57:41.582529] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.417 [2024-05-15 02:57:41.582537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:31:38.417 [2024-05-15 02:57:41.582548] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582561] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.417 [2024-05-15 02:57:41.582572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.417 [2024-05-15 02:57:41.582588] 
[... roughly twenty further iterations of the same DEBUG sequence elided for readability: each one is another nvme_rdma_request_ready / nvme_rdma_qpair_submit_request pair, a FABRIC PROPERTY GET on the admin queue (qid:0 cid:3), a CQ recv completion, and a SUCCESS (00/00) completion with cdw0:1, with sqhd advancing from 001c through 000e; only the local buffer addresses and sequence numbers differ between iterations ...]
0x40 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.583752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.418 [2024-05-15 02:57:41.583767] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.418 [2024-05-15 02:57:41.583775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:31:38.418 [2024-05-15 02:57:41.583785] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.583798] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.583810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.418 [2024-05-15 02:57:41.583828] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.418 [2024-05-15 02:57:41.583838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:31:38.418 [2024-05-15 02:57:41.583848] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.583861] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.583872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.418 [2024-05-15 02:57:41.583890] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.418 [2024-05-15 02:57:41.587907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:31:38.418 [2024-05-15 02:57:41.587917] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.587931] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.587943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:31:38.418 [2024-05-15 02:57:41.587964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:31:38.418 [2024-05-15 02:57:41.587972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0012 p:0 m:0 dnr:0 00:31:38.418 [2024-05-15 02:57:41.587982] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183800 00:31:38.418 [2024-05-15 02:57:41.587992] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:31:38.418 0 Kelvin (-273 Celsius) 00:31:38.418 Available Spare: 0% 00:31:38.418 Available Spare Threshold: 0% 00:31:38.418 Life Percentage Used: 0% 00:31:38.418 Data Units Read: 0 00:31:38.418 Data Units Written: 0 00:31:38.418 Host Read Commands: 0 00:31:38.418 Host Write Commands: 0 00:31:38.418 Controller Busy Time: 0 minutes 00:31:38.418 Power Cycles: 0 00:31:38.418 Power On Hours: 0 hours 00:31:38.418 Unsafe Shutdowns: 0 00:31:38.418 Unrecoverable Media Errors: 0 
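The DEBUG trace above is the NVMe-oF controller shutdown handshake: nvme_ctrlr_shutdown_set_cc_done records that CC.SHN has been written (RTD3E = 0 us, so the driver falls back to a 10000 ms timeout), the long run of FABRIC PROPERTY GET / SUCCESS pairs is the host polling controller status until nvme_ctrlr_shutdown_poll_async reports "shutdown complete in 5 milliseconds", and the identify/log-page dump that starts at "0 Kelvin (-273 Celsius)" continues below. Over RDMA there is no BAR to read, so every register access travels as a Fabrics Property Get/Set admin command. As a rough illustration (not part of the test), the same properties can be read by hand with an nvme-cli build that supports the fabrics property commands; the device node is an assumption, and 0x14/0x1c are the standard CC/CSTS register offsets:

# Poll the controller status property (CSTS, offset 0x1c) a few times, the way the
# driver does while waiting for CSTS.SHST to flip to "shutdown complete":
for _ in 1 2 3 4 5; do
    sudo nvme get-property /dev/nvme0 --offset=0x1c --human-readable
    sleep 0.2
done
# The configuration register (CC, offset 0x14) can be read the same way:
sudo nvme get-property /dev/nvme0 --offset=0x14 --human-readable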
00:31:38.418 Lifetime Error Log Entries: 0 00:31:38.418 Warning Temperature Time: 0 minutes 00:31:38.418 Critical Temperature Time: 0 minutes 00:31:38.418 00:31:38.418 Number of Queues 00:31:38.418 ================ 00:31:38.418 Number of I/O Submission Queues: 127 00:31:38.418 Number of I/O Completion Queues: 127 00:31:38.418 00:31:38.418 Active Namespaces 00:31:38.418 ================= 00:31:38.418 Namespace ID:1 00:31:38.418 Error Recovery Timeout: Unlimited 00:31:38.418 Command Set Identifier: NVM (00h) 00:31:38.418 Deallocate: Supported 00:31:38.418 Deallocated/Unwritten Error: Not Supported 00:31:38.418 Deallocated Read Value: Unknown 00:31:38.418 Deallocate in Write Zeroes: Not Supported 00:31:38.418 Deallocated Guard Field: 0xFFFF 00:31:38.418 Flush: Supported 00:31:38.418 Reservation: Supported 00:31:38.418 Namespace Sharing Capabilities: Multiple Controllers 00:31:38.418 Size (in LBAs): 131072 (0GiB) 00:31:38.418 Capacity (in LBAs): 131072 (0GiB) 00:31:38.418 Utilization (in LBAs): 131072 (0GiB) 00:31:38.418 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:38.418 EUI64: ABCDEF0123456789 00:31:38.418 UUID: 2bd41e32-ee77-40d5-a8ab-9012f6fc51e1 00:31:38.418 Thin Provisioning: Not Supported 00:31:38.418 Per-NS Atomic Units: Yes 00:31:38.418 Atomic Boundary Size (Normal): 0 00:31:38.418 Atomic Boundary Size (PFail): 0 00:31:38.418 Atomic Boundary Offset: 0 00:31:38.418 Maximum Single Source Range Length: 65535 00:31:38.418 Maximum Copy Length: 65535 00:31:38.418 Maximum Source Range Count: 1 00:31:38.418 NGUID/EUI64 Never Reused: No 00:31:38.418 Namespace Write Protected: No 00:31:38.418 Number of LBA Formats: 1 00:31:38.418 Current LBA Format: LBA Format #00 00:31:38.418 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:38.418 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:38.418 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:38.418 rmmod nvme_rdma 00:31:38.418 rmmod nvme_fabrics 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 949725 ']' 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # 
killprocess 949725 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 949725 ']' 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 949725 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 949725 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 949725' 00:31:38.679 killing process with pid 949725 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # kill 949725 00:31:38.679 [2024-05-15 02:57:41.779570] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:38.679 02:57:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@971 -- # wait 949725 00:31:38.679 [2024-05-15 02:57:41.886424] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:31:38.939 02:57:42 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:38.939 02:57:42 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:38.939 00:31:38.939 real 0m9.027s 00:31:38.939 user 0m9.410s 00:31:38.939 sys 0m5.719s 00:31:38.939 02:57:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:38.939 02:57:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:38.939 ************************************ 00:31:38.939 END TEST nvmf_identify 00:31:38.939 ************************************ 00:31:38.939 02:57:42 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:38.939 02:57:42 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:38.939 02:57:42 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:38.939 02:57:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:38.939 ************************************ 00:31:38.939 START TEST nvmf_perf 00:31:38.939 ************************************ 00:31:38.939 02:57:42 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:31:39.198 * Looking for test storage... 
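Just above, the identify test tears itself down before the perf suite (whose output continues below) takes over: the test subsystem is deleted over RPC, the nvmf_tgt instance (pid 949725) is signalled and reaped by killprocess, and nvmftestfini unloads the host-side fabrics modules (the "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines). Condensed into plain shell and run from the SPDK checkout, that sequence amounts to roughly the following; this is a sketch of what the autotest helpers do, not their full logic, with the pid and module names taken from this log:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # host/identify.sh@52
kill 949725                        # killprocess sends SIGTERM to the nvmf_tgt pid
wait 949725 2>/dev/null || true    # reap it if it was started from this shell
sudo modprobe -v -r nvme-rdma      # nvmftestfini: unload the initiator-side modules
sudo modprobe -v -r nvme-fabrics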
00:31:39.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.198 
02:57:42 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.198 02:57:42 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:39.199 02:57:42 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:45.770 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:31:45.771 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:31:45.771 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:31:45.771 Found net devices under 0000:18:00.0: mlx_0_0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:31:45.771 Found net devices under 0000:18:00.1: mlx_0_1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 
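The scan above matched both ports of a Mellanox adapter (vendor 0x15b3, device 0x1015) and resolved each PCI function to its network interface through /sys/bus/pci/devices/<addr>/net, which is where mlx_0_0 and mlx_0_1 come from. Outside the common.sh helpers the same mapping can be reproduced with a short loop; the vendor ID is the one in the log, and lspci's -D/-n/-d flags keep the PCI domain, print numeric IDs, and filter by vendor:

# Map every Mellanox PCI function to the netdev sitting on top of it:
for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && printf '%s -> %s\n' "$pci" "$(basename "$net")"
    done
done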
00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:45.771 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:45.771 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:31:45.771 altname enp24s0f0np0 00:31:45.771 altname ens785f0np0 00:31:45.771 inet 192.168.100.8/24 scope global mlx_0_0 00:31:45.771 valid_lft forever preferred_lft forever 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:45.771 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:45.771 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:31:45.771 altname enp24s0f1np1 00:31:45.771 altname ens785f1np1 00:31:45.771 inet 192.168.100.9/24 scope global mlx_0_1 00:31:45.771 valid_lft forever preferred_lft forever 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:45.771 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:45.772 02:57:48 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:45.772 192.168.100.9' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:45.772 192.168.100.9' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:45.772 192.168.100.9' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=952829 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 952829 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 952829 ']' 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:45.772 02:57:48 
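The get_rdma_if_list / get_ip_address trace above reduces `ip -o -4 addr show` output to bare IPv4 addresses and then splits the list into the two target IPs the rest of the run connects to (192.168.100.8 and 192.168.100.9). The same derivation in plain shell, using the interface names from this log:

get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # -> 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # -> 192.168.100.9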
nvmf_rdma.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:45.772 02:57:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:45.772 [2024-05-15 02:57:48.858640] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:31:45.772 [2024-05-15 02:57:48.858714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.772 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.772 [2024-05-15 02:57:48.966697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:45.772 [2024-05-15 02:57:49.018861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.772 [2024-05-15 02:57:49.018915] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.772 [2024-05-15 02:57:49.018929] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.772 [2024-05-15 02:57:49.018943] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.772 [2024-05-15 02:57:49.018953] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:45.772 [2024-05-15 02:57:49.019006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.772 [2024-05-15 02:57:49.019093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.772 [2024-05-15 02:57:49.019195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.772 [2024-05-15 02:57:49.019195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.031 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.032 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:46.599 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:46.599 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:46.858 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:46.858 02:57:49 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:47.116 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' 
Malloc0' 00:31:47.116 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:47.116 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:47.116 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:31:47.116 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:31:47.375 [2024-05-15 02:57:50.464033] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:31:47.375 [2024-05-15 02:57:50.492353] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1146ca0/0x1175700) succeed. 00:31:47.375 [2024-05-15 02:57:50.507730] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11482e0/0x11d5700) succeed. 00:31:47.375 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:47.633 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:47.633 02:57:50 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.892 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:47.892 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:48.151 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:48.410 [2024-05-15 02:57:51.641833] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:48.410 [2024-05-15 02:57:51.642272] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:48.410 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:48.669 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:48.669 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:48.669 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:48.669 02:57:51 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:50.049 Initializing NVMe Controllers 00:31:50.049 Attached to NVMe Controller at 0000:5e:00.0 [144d:a80a] 00:31:50.049 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:50.049 Initialization complete. Launching workers. 
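The rpc.py calls traced above are the whole target-side bring-up that the following perf runs exercise: an RDMA transport with 1024 shared buffers and in-capsule data requested as 0 (bumped to the 256-byte minimum, per the warning above), a 64 MiB / 512-byte-block malloc bdev plus the local NVMe drive at 0000:5e:00.0 (attached via gen_nvme.sh) exported as namespaces of nqn.2016-06.io.spdk:cnode1, and RDMA listeners on 192.168.100.8:4420 for both the subsystem and discovery. The spdk_nvme_perf run that has just started baselines that drive over local PCIe first; its IOPS/latency summary is the table directly below. Collected in one place, the target setup looks roughly like this (rpc.py path shortened to the SPDK checkout; all values are the ones in the log):

rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512                                    # -> Malloc0
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420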
00:31:50.049 ======================================================== 00:31:50.049 Latency(us) 00:31:50.049 Device Information : IOPS MiB/s Average min max 00:31:50.049 PCIE (0000:5e:00.0) NSID 1 from core 0: 70517.15 275.46 453.05 14.61 5236.20 00:31:50.049 ======================================================== 00:31:50.049 Total : 70517.15 275.46 453.05 14.61 5236.20 00:31:50.049 00:31:50.049 02:57:53 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:50.049 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.341 Initializing NVMe Controllers 00:31:53.341 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.341 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:53.341 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:53.341 Initialization complete. Launching workers. 00:31:53.341 ======================================================== 00:31:53.341 Latency(us) 00:31:53.341 Device Information : IOPS MiB/s Average min max 00:31:53.341 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5021.99 19.62 198.81 65.29 4318.93 00:31:53.341 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4765.00 18.61 209.56 78.98 4250.58 00:31:53.341 ======================================================== 00:31:53.341 Total : 9786.99 38.23 204.05 65.29 4318.93 00:31:53.341 00:31:53.341 02:57:56 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:53.600 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.892 Initializing NVMe Controllers 00:31:56.892 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.892 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.892 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:56.892 Initialization complete. Launching workers. 00:31:56.892 ======================================================== 00:31:56.893 Latency(us) 00:31:56.893 Device Information : IOPS MiB/s Average min max 00:31:56.893 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13112.91 51.22 2439.94 708.53 6186.22 00:31:56.893 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4017.57 15.69 7963.79 6692.37 9270.05 00:31:56.893 ======================================================== 00:31:56.893 Total : 17130.48 66.92 3735.43 708.53 9270.05 00:31:56.893 00:31:56.893 02:58:00 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:31:56.893 02:58:00 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:56.893 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.167 Initializing NVMe Controllers 00:32:02.167 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.167 Controller IO queue size 128, less than required. 
00:32:02.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.167 Controller IO queue size 128, less than required. 00:32:02.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:02.167 Initialization complete. Launching workers. 00:32:02.167 ======================================================== 00:32:02.167 Latency(us) 00:32:02.167 Device Information : IOPS MiB/s Average min max 00:32:02.167 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2566.00 641.50 50261.48 18353.97 102760.47 00:32:02.167 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3476.50 869.12 36445.55 15370.70 57694.63 00:32:02.167 ======================================================== 00:32:02.167 Total : 6042.49 1510.62 42312.60 15370.70 102760.47 00:32:02.167 00:32:02.168 02:58:04 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:32:02.168 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.168 No valid NVMe controllers or AIO or URING devices found 00:32:02.168 Initializing NVMe Controllers 00:32:02.168 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.168 Controller IO queue size 128, less than required. 00:32:02.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:02.168 Controller IO queue size 128, less than required. 00:32:02.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:02.168 WARNING: Some requested NVMe devices were skipped 00:32:02.168 02:58:04 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:32:02.168 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.399 Initializing NVMe Controllers 00:32:06.399 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.399 Controller IO queue size 128, less than required. 00:32:06.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:06.400 Controller IO queue size 128, less than required. 00:32:06.400 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:06.400 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:06.400 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:06.400 Initialization complete. Launching workers. 
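This last run repeats the 128-deep, 256 KiB random read/write mix with --transport-stat, so alongside the latency table the initiator prints per-queue RDMA statistics below (polls, idle_polls, completions, send/recv work requests, doorbell updates). One quick way to read those counters is the fraction of poll iterations that actually found completions; for the NSID 1 queue pair in the table that follows it works out to just under one percent, i.e. the reactor is idle on almost every poll:

# Busy-poll ratio for the NSID 1 queue pair (counters copied from the stats below):
awk 'BEGIN { polls = 261563; idle = 259052;
             printf "busy polls: %d (%.2f%% of all polls)\n", polls - idle, 100 * (polls - idle) / polls }'
# -> busy polls: 2511 (0.96% of all polls)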
00:32:06.400 00:32:06.400 ==================== 00:32:06.400 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:06.400 RDMA transport: 00:32:06.400 dev name: mlx5_0 00:32:06.400 polls: 261563 00:32:06.400 idle_polls: 259052 00:32:06.400 completions: 33034 00:32:06.400 queued_requests: 1 00:32:06.400 total_send_wrs: 16517 00:32:06.400 send_doorbell_updates: 2253 00:32:06.400 total_recv_wrs: 16644 00:32:06.400 recv_doorbell_updates: 2261 00:32:06.400 --------------------------------- 00:32:06.400 00:32:06.400 ==================== 00:32:06.400 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:06.400 RDMA transport: 00:32:06.400 dev name: mlx5_0 00:32:06.400 polls: 265302 00:32:06.400 idle_polls: 265039 00:32:06.400 completions: 14354 00:32:06.400 queued_requests: 1 00:32:06.400 total_send_wrs: 7177 00:32:06.400 send_doorbell_updates: 252 00:32:06.400 total_recv_wrs: 7304 00:32:06.400 recv_doorbell_updates: 253 00:32:06.400 --------------------------------- 00:32:06.400 ======================================================== 00:32:06.400 Latency(us) 00:32:06.400 Device Information : IOPS MiB/s Average min max 00:32:06.400 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4129.00 1032.25 31088.94 15756.23 76071.11 00:32:06.400 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1794.00 448.50 71178.39 39910.14 103870.79 00:32:06.400 ======================================================== 00:32:06.400 Total : 5922.99 1480.75 43231.52 15756.23 103870.79 00:32:06.400 00:32:06.400 02:58:09 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:06.400 02:58:09 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.400 02:58:09 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:06.400 02:58:09 nvmf_rdma.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:32:06.400 02:58:09 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9b03839d-fc4d-4014-a44a-7dec00d6d845 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9b03839d-fc4d-4014-a44a-7dec00d6d845 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=9b03839d-fc4d-4014-a44a-7dec00d6d845 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:32:07.780 { 00:32:07.780 "uuid": "9b03839d-fc4d-4014-a44a-7dec00d6d845", 00:32:07.780 "name": "lvs_0", 00:32:07.780 "base_bdev": "Nvme0n1", 00:32:07.780 "total_data_clusters": 457407, 00:32:07.780 "free_clusters": 457407, 00:32:07.780 "block_size": 512, 00:32:07.780 "cluster_size": 4194304 00:32:07.780 } 00:32:07.780 ]' 00:32:07.780 02:58:10 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | 
select(.uuid=="9b03839d-fc4d-4014-a44a-7dec00d6d845") .free_clusters' 00:32:07.780 02:58:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=457407 00:32:07.780 02:58:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="9b03839d-fc4d-4014-a44a-7dec00d6d845") .cluster_size' 00:32:07.780 02:58:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:32:07.780 02:58:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=1829628 00:32:08.040 02:58:11 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 1829628 00:32:08.040 1829628 00:32:08.040 02:58:11 nvmf_rdma.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:08.040 02:58:11 nvmf_rdma.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:08.040 02:58:11 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b03839d-fc4d-4014-a44a-7dec00d6d845 lbd_0 20480 00:32:08.300 02:58:11 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # lb_guid=eb7038a0-eeaf-43d1-aeb4-bcc00cac821a 00:32:08.300 02:58:11 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore eb7038a0-eeaf-43d1-aeb4-bcc00cac821a lvs_n_0 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:32:10.207 { 00:32:10.207 "uuid": "9b03839d-fc4d-4014-a44a-7dec00d6d845", 00:32:10.207 "name": "lvs_0", 00:32:10.207 "base_bdev": "Nvme0n1", 00:32:10.207 "total_data_clusters": 457407, 00:32:10.207 "free_clusters": 452287, 00:32:10.207 "block_size": 512, 00:32:10.207 "cluster_size": 4194304 00:32:10.207 }, 00:32:10.207 { 00:32:10.207 "uuid": "5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618", 00:32:10.207 "name": "lvs_n_0", 00:32:10.207 "base_bdev": "eb7038a0-eeaf-43d1-aeb4-bcc00cac821a", 00:32:10.207 "total_data_clusters": 5114, 00:32:10.207 "free_clusters": 5114, 00:32:10.207 "block_size": 512, 00:32:10.207 "cluster_size": 4194304 00:32:10.207 } 00:32:10.207 ]' 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618") .free_clusters' 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=5114 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618") .cluster_size' 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=20456 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 
20456 00:32:10.207 20456 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:10.207 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5cd9cbfb-aafd-4e9f-8b0f-a4c9fc474618 lbd_nest_0 20456 00:32:10.466 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=7b1d0fde-98de-4f0c-bc6f-e0cf840e3cb9 00:32:10.466 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.725 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:10.725 02:58:13 nvmf_rdma.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7b1d0fde-98de-4f0c-bc6f-e0cf840e3cb9 00:32:10.985 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:11.244 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:11.244 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:11.244 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:11.244 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:11.244 02:58:14 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:11.244 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.458 Initializing NVMe Controllers 00:32:23.458 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:23.458 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:23.458 Initialization complete. Launching workers. 00:32:23.458 ======================================================== 00:32:23.458 Latency(us) 00:32:23.458 Device Information : IOPS MiB/s Average min max 00:32:23.458 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4227.46 2.06 236.14 96.92 8093.25 00:32:23.459 ======================================================== 00:32:23.459 Total : 4227.46 2.06 236.14 96.92 8093.25 00:32:23.459 00:32:23.459 02:58:25 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:23.459 02:58:25 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:23.459 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.674 Initializing NVMe Controllers 00:32:35.674 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:35.674 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:35.674 Initialization complete. Launching workers. 
00:32:35.674 ======================================================== 00:32:35.674 Latency(us) 00:32:35.674 Device Information : IOPS MiB/s Average min max 00:32:35.674 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2579.00 322.38 387.21 160.96 7158.03 00:32:35.674 ======================================================== 00:32:35.674 Total : 2579.00 322.38 387.21 160.96 7158.03 00:32:35.674 00:32:35.674 02:58:37 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:35.674 02:58:37 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:35.674 02:58:37 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:35.674 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.659 Initializing NVMe Controllers 00:32:45.659 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:45.659 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:45.659 Initialization complete. Launching workers. 00:32:45.659 ======================================================== 00:32:45.659 Latency(us) 00:32:45.659 Device Information : IOPS MiB/s Average min max 00:32:45.659 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8358.40 4.08 3827.68 1533.46 9947.95 00:32:45.659 ======================================================== 00:32:45.659 Total : 8358.40 4.08 3827.68 1533.46 9947.95 00:32:45.659 00:32:45.659 02:58:48 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:45.659 02:58:48 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:45.659 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.883 Initializing NVMe Controllers 00:32:57.883 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:32:57.883 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:57.883 Initialization complete. Launching workers. 00:32:57.883 ======================================================== 00:32:57.883 Latency(us) 00:32:57.883 Device Information : IOPS MiB/s Average min max 00:32:57.883 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3438.70 429.84 9311.65 5940.28 22898.95 00:32:57.883 ======================================================== 00:32:57.883 Total : 3438.70 429.84 9311.65 5940.28 22898.95 00:32:57.883 00:32:57.883 02:59:00 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:57.883 02:59:00 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:57.883 02:59:00 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:32:57.883 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.112 Initializing NVMe Controllers 00:33:10.112 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:10.112 Controller IO queue size 128, less than required. 
00:33:10.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:10.112 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:10.112 Initialization complete. Launching workers. 00:33:10.112 ======================================================== 00:33:10.112 Latency(us) 00:33:10.112 Device Information : IOPS MiB/s Average min max 00:33:10.112 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12547.64 6.13 10201.50 2699.56 22058.87 00:33:10.112 ======================================================== 00:33:10.112 Total : 12547.64 6.13 10201.50 2699.56 22058.87 00:33:10.112 00:33:10.112 02:59:11 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:10.112 02:59:11 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:10.112 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.093 Initializing NVMe Controllers 00:33:20.093 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:20.093 Controller IO queue size 128, less than required. 00:33:20.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:20.093 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:20.093 Initialization complete. Launching workers. 00:33:20.093 ======================================================== 00:33:20.093 Latency(us) 00:33:20.093 Device Information : IOPS MiB/s Average min max 00:33:20.093 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10510.90 1313.86 12181.14 3486.83 26528.82 00:33:20.093 ======================================================== 00:33:20.093 Total : 10510.90 1313.86 12181.14 3486.83 26528.82 00:33:20.093 00:33:20.093 02:59:22 nvmf_rdma.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.093 02:59:23 nvmf_rdma.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b1d0fde-98de-4f0c-bc6f-e0cf840e3cb9 00:33:21.994 02:59:24 nvmf_rdma.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:21.994 02:59:25 nvmf_rdma.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb7038a0-eeaf-43d1-aeb4-bcc00cac821a 00:33:22.252 02:59:25 nvmf_rdma.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 
-- # for i in {1..20} 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:22.511 rmmod nvme_rdma 00:33:22.511 rmmod nvme_fabrics 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 952829 ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 952829 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 952829 ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 952829 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 952829 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 952829' 00:33:22.511 killing process with pid 952829 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # kill 952829 00:33:22.511 [2024-05-15 02:59:25.720042] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:22.511 02:59:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@971 -- # wait 952829 00:33:22.511 [2024-05-15 02:59:25.792029] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:33:25.047 02:59:27 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:25.047 02:59:27 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:25.047 00:33:25.047 real 1m45.646s 00:33:25.047 user 6m39.965s 00:33:25.047 sys 0m7.510s 00:33:25.047 02:59:27 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:25.047 02:59:27 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:25.047 ************************************ 00:33:25.047 END TEST nvmf_perf 00:33:25.047 ************************************ 00:33:25.047 02:59:27 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:25.047 02:59:27 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:25.047 02:59:27 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:25.047 02:59:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:25.047 ************************************ 00:33:25.047 START TEST nvmf_fio_host 00:33:25.047 ************************************ 00:33:25.047 02:59:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:33:25.047 * Looking for test storage... 
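Before nvmf_fio_host begins its own setup, the perf suite above has already torn everything down in reverse order of creation: the subsystem first, then the nested logical volume and its store, then the base volume and store, and finally the kernel modules. Condensed from the rpc.py calls and modprobe lines logged above (the UUIDs belong to this run and would differ elsewhere):
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$RPC bdev_lvol_delete 7b1d0fde-98de-4f0c-bc6f-e0cf840e3cb9    # lbd_nest_0
$RPC bdev_lvol_delete_lvstore -l lvs_n_0
$RPC bdev_lvol_delete eb7038a0-eeaf-43d1-aeb4-bcc00cac821a    # lbd_0
$RPC bdev_lvol_delete_lvstore -l lvs_0
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics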
00:33:25.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.047 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:25.048 02:59:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.617 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:33:31.617 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:31.617 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:31.617 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:33:31.618 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == 
unknown ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:33:31.618 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:33:31.618 Found net devices under 0000:18:00.0: mlx_0_0 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:33:31.618 Found net devices under 0000:18:00.1: mlx_0_1 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:31.618 02:59:34 
nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:31.618 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:31.618 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:31.618 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:33:31.618 altname enp24s0f0np0 00:33:31.618 altname ens785f0np0 00:33:31.619 inet 192.168.100.8/24 scope global mlx_0_0 00:33:31.619 valid_lft forever preferred_lft forever 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:31.619 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:31.619 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:33:31.619 altname enp24s0f1np1 00:33:31.619 altname ens785f1np1 00:33:31.619 inet 192.168.100.9/24 scope global mlx_0_1 00:33:31.619 valid_lft forever preferred_lft forever 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:31.619 02:59:34 
nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:31.619 192.168.100.9' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:31.619 192.168.100.9' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:31.619 192.168.100.9' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=968261 00:33:31.619 02:59:34 
nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 968261 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 968261 ']' 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.619 [2024-05-15 02:59:34.517099] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:33:31.619 [2024-05-15 02:59:34.517174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.619 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.619 [2024-05-15 02:59:34.627057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.619 [2024-05-15 02:59:34.679217] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.619 [2024-05-15 02:59:34.679273] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.619 [2024-05-15 02:59:34.679288] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.619 [2024-05-15 02:59:34.679301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.619 [2024-05-15 02:59:34.679312] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.619 [2024-05-15 02:59:34.679379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.619 [2024-05-15 02:59:34.679401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.619 [2024-05-15 02:59:34.679507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.619 [2024-05-15 02:59:34.679507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.619 02:59:34 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.619 [2024-05-15 02:59:34.829103] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d3dd70/0x1d42260) succeed. 
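The target for the fio run is brought up the same way as in the perf suite: nvmf_tgt is started with a four-core mask, the script waits for it to listen on /var/tmp/spdk.sock, and the RDMA transport is then created, which is what produces the two 'Create IB device' notices around this point. Reduced to the commands recorded in this log (rpc_cmd in the trace is assumed to resolve to scripts/rpc.py, as the other RPC calls in this log do):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for /var/tmp/spdk.sock to come up, then create the transport before adding subsystems
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192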
00:33:31.619 [2024-05-15 02:59:34.844070] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d3f3b0/0x1d838f0) succeed. 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 Malloc1 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 [2024-05-15 02:59:35.104880] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:31.879 [2024-05-15 02:59:35.105304] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:33:31.879 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:32.137 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:32.137 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:32.137 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:32.137 02:59:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:32.396 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:32.396 fio-3.35 00:33:32.396 Starting 1 thread 00:33:32.396 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.931 00:33:34.931 test: (groupid=0, jobs=1): err= 0: pid=968552: Wed May 15 02:59:37 2024 00:33:34.931 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(96.6MiB/2005msec) 00:33:34.931 slat (nsec): min=2218, max=32925, avg=2300.57, stdev=525.53 00:33:34.931 clat (usec): min=2187, max=9184, avg=5168.42, stdev=136.29 00:33:34.931 lat (usec): min=2209, max=9186, avg=5170.72, stdev=136.15 00:33:34.931 clat percentiles (usec): 00:33:34.931 | 1.00th=[ 5145], 5.00th=[ 5145], 10.00th=[ 5145], 20.00th=[ 5145], 00:33:34.931 | 30.00th=[ 5145], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5145], 00:33:34.931 | 70.00th=[ 
5145], 80.00th=[ 5211], 90.00th=[ 5211], 95.00th=[ 5211], 00:33:34.931 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 7701], 99.95th=[ 8979], 00:33:34.931 | 99.99th=[ 9110] 00:33:34.931 bw ( KiB/s): min=48240, max=49976, per=100.00%, avg=49320.00, stdev=759.48, samples=4 00:33:34.931 iops : min=12060, max=12494, avg=12330.00, stdev=189.87, samples=4 00:33:34.931 write: IOPS=12.3k, BW=48.0MiB/s (50.4MB/s)(96.3MiB/2005msec); 0 zone resets 00:33:34.931 slat (nsec): min=2296, max=15205, avg=2692.61, stdev=506.33 00:33:34.931 clat (usec): min=3324, max=9174, avg=5166.19, stdev=142.21 00:33:34.931 lat (usec): min=3336, max=9177, avg=5168.88, stdev=142.08 00:33:34.931 clat percentiles (usec): 00:33:34.931 | 1.00th=[ 5145], 5.00th=[ 5145], 10.00th=[ 5145], 20.00th=[ 5145], 00:33:34.931 | 30.00th=[ 5145], 40.00th=[ 5145], 50.00th=[ 5145], 60.00th=[ 5145], 00:33:34.931 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5211], 95.00th=[ 5211], 00:33:34.931 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 7767], 99.95th=[ 8979], 00:33:34.931 | 99.99th=[ 9110] 00:33:34.931 bw ( KiB/s): min=48824, max=49752, per=99.98%, avg=49182.00, stdev=398.15, samples=4 00:33:34.931 iops : min=12206, max=12438, avg=12295.50, stdev=99.54, samples=4 00:33:34.931 lat (msec) : 4=0.09%, 10=99.91% 00:33:34.931 cpu : usr=99.50%, sys=0.05%, ctx=16, majf=0, minf=2 00:33:34.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:34.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:34.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:34.931 issued rwts: total=24721,24657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:34.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:34.931 00:33:34.931 Run status group 0 (all jobs): 00:33:34.931 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=96.6MiB (101MB), run=2005-2005msec 00:33:34.931 WRITE: bw=48.0MiB/s (50.4MB/s), 48.0MiB/s-48.0MiB/s (50.4MB/s-50.4MB/s), io=96.3MiB (101MB), run=2005-2005msec 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:34.931 02:59:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:33:34.931 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:34.931 fio-3.35 00:33:34.931 Starting 1 thread 00:33:34.931 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.463 00:33:37.463 test: (groupid=0, jobs=1): err= 0: pid=969005: Wed May 15 02:59:40 2024 00:33:37.463 read: IOPS=10.8k, BW=168MiB/s (177MB/s)(336MiB/1994msec) 00:33:37.463 slat (nsec): min=3702, max=42909, avg=4004.84, stdev=1272.27 00:33:37.463 clat (usec): min=412, max=11082, avg=2822.84, stdev=1768.65 00:33:37.463 lat (usec): min=415, max=11086, avg=2826.85, stdev=1769.05 00:33:37.463 clat percentiles (usec): 00:33:37.463 | 1.00th=[ 807], 5.00th=[ 1123], 10.00th=[ 1319], 20.00th=[ 1582], 00:33:37.463 | 30.00th=[ 1811], 40.00th=[ 2040], 50.00th=[ 2311], 60.00th=[ 2606], 00:33:37.463 | 70.00th=[ 2966], 80.00th=[ 3523], 90.00th=[ 5342], 95.00th=[ 7177], 00:33:37.463 | 99.00th=[ 8979], 99.50th=[ 9896], 99.90th=[10683], 99.95th=[10814], 00:33:37.463 | 99.99th=[11076] 00:33:37.463 bw ( KiB/s): min=67040, max=96000, per=49.46%, avg=85304.00, stdev=13399.26, samples=4 00:33:37.463 iops : min= 4190, max= 6000, avg=5331.50, stdev=837.45, samples=4 00:33:37.463 write: IOPS=6072, BW=94.9MiB/s (99.5MB/s)(174MiB/1838msec); 0 zone resets 00:33:37.463 slat (usec): min=43, max=131, avg=45.48, stdev= 5.80 00:33:37.463 clat (usec): min=3399, max=30919, avg=15955.30, stdev=4397.30 00:33:37.463 lat (usec): min=3443, max=30964, avg=16000.78, stdev=4397.30 00:33:37.463 clat percentiles (usec): 00:33:37.463 | 1.00th=[ 7439], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11863], 00:33:37.463 | 30.00th=[12649], 40.00th=[13829], 50.00th=[15270], 60.00th=[17433], 00:33:37.463 | 70.00th=[19006], 80.00th=[20055], 90.00th=[21627], 95.00th=[23200], 00:33:37.463 | 99.00th=[25822], 99.50th=[26608], 99.90th=[29230], 99.95th=[30278], 00:33:37.463 | 99.99th=[30802] 00:33:37.463 bw ( KiB/s): min=69952, max=100224, per=91.05%, avg=88472.00, 
stdev=13575.58, samples=4 00:33:37.464 iops : min= 4372, max= 6264, avg=5529.50, stdev=848.47, samples=4 00:33:37.464 lat (usec) : 500=0.01%, 750=0.46%, 1000=1.39% 00:33:37.464 lat (msec) : 2=23.43%, 4=30.41%, 10=11.70%, 20=25.46%, 50=7.13% 00:33:37.464 cpu : usr=97.46%, sys=1.10%, ctx=141, majf=0, minf=1 00:33:37.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:33:37.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:37.464 issued rwts: total=21495,11162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:37.464 00:33:37.464 Run status group 0 (all jobs): 00:33:37.464 READ: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=336MiB (352MB), run=1994-1994msec 00:33:37.464 WRITE: bw=94.9MiB/s (99.5MB/s), 94.9MiB/s-94.9MiB/s (99.5MB/s-99.5MB/s), io=174MiB (183MB), run=1838-1838msec 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=() 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # local bdfs 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:5e:00.0 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 192.168.100.8 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.464 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:37.723 Nvme0n1 00:33:37.723 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.723 02:59:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:37.723 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.723 02:59:40 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=765d6d9a-be27-4040-84ae-beb72b3c806f 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- 
host/fio.sh@52 -- # get_lvs_free_mb 765d6d9a-be27-4040-84ae-beb72b3c806f 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=765d6d9a-be27-4040-84ae-beb72b3c806f 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:33:38.292 { 00:33:38.292 "uuid": "765d6d9a-be27-4040-84ae-beb72b3c806f", 00:33:38.292 "name": "lvs_0", 00:33:38.292 "base_bdev": "Nvme0n1", 00:33:38.292 "total_data_clusters": 1787, 00:33:38.292 "free_clusters": 1787, 00:33:38.292 "block_size": 512, 00:33:38.292 "cluster_size": 1073741824 00:33:38.292 } 00:33:38.292 ]' 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="765d6d9a-be27-4040-84ae-beb72b3c806f") .free_clusters' 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=1787 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="765d6d9a-be27-4040-84ae-beb72b3c806f") .cluster_size' 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=1073741824 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=1829888 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 1829888 00:33:38.292 1829888 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 b2903c33-602b-4119-82b2-93da4f210d40 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:33:38.292 
02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.292 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:33:38.293 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:38.576 02:59:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:38.836 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:38.836 fio-3.35 00:33:38.836 Starting 1 thread 00:33:38.836 EAL: No free 2048 kB hugepages reported on node 1 
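Condensed, the host/fio.sh flow traced above is a short RPC sequence followed by one fio run through the SPDK NVMe ioengine. The sketch below replays it by hand and is illustrative only: it assumes nvmf_tgt is already running with an RDMA transport created, reuses the workspace paths and fio location shown in the trace, and omits the error handling and sanitizer preload logic that rpc_cmd/fio_plugin add.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
cfg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio

# back the subsystem with a 64 MiB malloc bdev using 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# run fio with the SPDK plugin preloaded; the filename string carries the
# transport address and namespace instead of a block-device path
LD_PRELOAD=$plugin /usr/src/fio/fio "$cfg" \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The logical-volume sizes passed to bdev_lvol_create in this trace are simply free_clusters × cluster_size expressed in MiB: 1787 × 1 GiB = 1829888 MiB for lvs_0 above, and 457025 × 4 MiB = 1828100 MiB for the nested lvs_n_0 created further on.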
00:33:41.369 00:33:41.369 test: (groupid=0, jobs=1): err= 0: pid=969499: Wed May 15 02:59:44 2024 00:33:41.369 read: IOPS=7967, BW=31.1MiB/s (32.6MB/s)(62.4MiB/2006msec) 00:33:41.369 slat (nsec): min=2231, max=19895, avg=2327.41, stdev=298.17 00:33:41.369 clat (usec): min=3191, max=13202, avg=7970.55, stdev=286.23 00:33:41.369 lat (usec): min=3202, max=13204, avg=7972.88, stdev=286.19 00:33:41.369 clat percentiles (usec): 00:33:41.369 | 1.00th=[ 6718], 5.00th=[ 7898], 10.00th=[ 7898], 20.00th=[ 7963], 00:33:41.369 | 30.00th=[ 7963], 40.00th=[ 7963], 50.00th=[ 7963], 60.00th=[ 7963], 00:33:41.369 | 70.00th=[ 7963], 80.00th=[ 8029], 90.00th=[ 8029], 95.00th=[ 8029], 00:33:41.369 | 99.00th=[ 8848], 99.50th=[ 9634], 99.90th=[11076], 99.95th=[13042], 00:33:41.369 | 99.99th=[13173] 00:33:41.369 bw ( KiB/s): min=30256, max=32496, per=99.89%, avg=31836.00, stdev=1060.45, samples=4 00:33:41.369 iops : min= 7564, max= 8124, avg=7959.00, stdev=265.11, samples=4 00:33:41.369 write: IOPS=7938, BW=31.0MiB/s (32.5MB/s)(62.2MiB/2006msec); 0 zone resets 00:33:41.369 slat (nsec): min=2309, max=13175, avg=2727.71, stdev=349.85 00:33:41.369 clat (usec): min=5035, max=13192, avg=7959.18, stdev=315.17 00:33:41.369 lat (usec): min=5040, max=13195, avg=7961.91, stdev=315.14 00:33:41.369 clat percentiles (usec): 00:33:41.369 | 1.00th=[ 6652], 5.00th=[ 7898], 10.00th=[ 7898], 20.00th=[ 7898], 00:33:41.369 | 30.00th=[ 7963], 40.00th=[ 7963], 50.00th=[ 7963], 60.00th=[ 7963], 00:33:41.369 | 70.00th=[ 7963], 80.00th=[ 7963], 90.00th=[ 8029], 95.00th=[ 8029], 00:33:41.369 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[12911], 99.95th=[13042], 00:33:41.369 | 99.99th=[13173] 00:33:41.369 bw ( KiB/s): min=31208, max=31968, per=99.91%, avg=31724.00, stdev=350.33, samples=4 00:33:41.369 iops : min= 7802, max= 7992, avg=7931.00, stdev=87.58, samples=4 00:33:41.369 lat (msec) : 4=0.01%, 10=99.80%, 20=0.19% 00:33:41.369 cpu : usr=99.45%, sys=0.15%, ctx=16, majf=0, minf=2 00:33:41.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:41.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.369 issued rwts: total=15983,15924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.369 00:33:41.369 Run status group 0 (all jobs): 00:33:41.369 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.4MiB (65.5MB), run=2006-2006msec 00:33:41.369 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.2MiB (65.2MB), run=2006-2006msec 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.369 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.937 02:59:44 
nvmf_rdma.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=f62280db-ef0d-4e06-9789-d6860e969c4c 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb f62280db-ef0d-4e06-9789-d6860e969c4c 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=f62280db-ef0d-4e06-9789-d6860e969c4c 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.937 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:33:41.937 { 00:33:41.937 "uuid": "765d6d9a-be27-4040-84ae-beb72b3c806f", 00:33:41.937 "name": "lvs_0", 00:33:41.937 "base_bdev": "Nvme0n1", 00:33:41.938 "total_data_clusters": 1787, 00:33:41.938 "free_clusters": 0, 00:33:41.938 "block_size": 512, 00:33:41.938 "cluster_size": 1073741824 00:33:41.938 }, 00:33:41.938 { 00:33:41.938 "uuid": "f62280db-ef0d-4e06-9789-d6860e969c4c", 00:33:41.938 "name": "lvs_n_0", 00:33:41.938 "base_bdev": "b2903c33-602b-4119-82b2-93da4f210d40", 00:33:41.938 "total_data_clusters": 457025, 00:33:41.938 "free_clusters": 457025, 00:33:41.938 "block_size": 512, 00:33:41.938 "cluster_size": 4194304 00:33:41.938 } 00:33:41.938 ]' 00:33:41.938 02:59:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f62280db-ef0d-4e06-9789-d6860e969c4c") .free_clusters' 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=457025 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="f62280db-ef0d-4e06-9789-d6860e969c4c") .cluster_size' 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=4194304 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=1828100 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 1828100 00:33:41.938 1828100 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.938 02:59:45 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.877 911ae9fb-9b71-4420-9485-8087b162a239 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:33:42.877 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:33:43.135 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:33:43.135 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:33:43.135 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:43.135 02:59:46 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:33:43.394 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:43.394 fio-3.35 00:33:43.394 Starting 1 thread 00:33:43.394 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.928 00:33:45.928 test: (groupid=0, jobs=1): err= 0: pid=970251: Wed May 15 02:59:48 2024 00:33:45.928 read: IOPS=9731, BW=38.0MiB/s (39.9MB/s)(76.2MiB/2005msec) 00:33:45.928 slat (nsec): min=2228, max=25858, avg=2301.94, stdev=371.00 00:33:45.928 clat (usec): min=3465, max=11338, avg=6513.29, stdev=238.34 00:33:45.928 lat (usec): min=3486, max=11340, avg=6515.59, stdev=238.30 00:33:45.928 clat percentiles (usec): 00:33:45.928 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6456], 20.00th=[ 6456], 00:33:45.929 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6521], 60.00th=[ 6521], 00:33:45.929 | 70.00th=[ 6521], 80.00th=[ 6521], 90.00th=[ 6587], 95.00th=[ 6587], 00:33:45.929 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10421], 00:33:45.929 | 99.99th=[11338] 00:33:45.929 bw ( KiB/s): min=37376, max=39760, per=99.94%, avg=38900.00, stdev=1063.06, samples=4 00:33:45.929 iops : min= 9344, max= 9940, avg=9725.00, stdev=265.76, samples=4 00:33:45.929 write: IOPS=9740, BW=38.0MiB/s (39.9MB/s)(76.3MiB/2005msec); 0 zone resets 00:33:45.929 slat (nsec): min=2302, max=14541, avg=2712.18, stdev=409.09 00:33:45.929 clat (usec): min=3459, max=11357, avg=6499.19, stdev=238.88 00:33:45.929 lat (usec): min=3466, max=11360, avg=6501.90, stdev=238.86 00:33:45.929 clat percentiles (usec): 00:33:45.929 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6456], 20.00th=[ 6456], 00:33:45.929 | 30.00th=[ 6456], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6521], 00:33:45.929 | 70.00th=[ 6521], 80.00th=[ 6521], 90.00th=[ 6521], 95.00th=[ 6587], 00:33:45.929 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[10421], 00:33:45.929 | 99.99th=[11338] 00:33:45.929 bw ( KiB/s): min=38016, max=39568, per=99.91%, avg=38926.00, stdev=658.15, samples=4 00:33:45.929 iops : min= 9504, max= 9892, avg=9731.50, stdev=164.54, samples=4 00:33:45.929 lat (msec) : 4=0.04%, 10=99.88%, 20=0.08% 00:33:45.929 cpu : usr=99.55%, sys=0.05%, ctx=16, majf=0, minf=2 00:33:45.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:45.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:45.929 issued rwts: total=19511,19529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:45.929 00:33:45.929 Run status group 0 (all jobs): 00:33:45.929 READ: bw=38.0MiB/s (39.9MB/s), 38.0MiB/s-38.0MiB/s (39.9MB/s-39.9MB/s), io=76.2MiB (79.9MB), run=2005-2005msec 00:33:45.929 WRITE: bw=38.0MiB/s (39.9MB/s), 38.0MiB/s-38.0MiB/s (39.9MB/s-39.9MB/s), io=76.3MiB (80.0MB), run=2005-2005msec 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- 
# set +x 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.929 02:59:48 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.833 02:59:50 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.400 02:59:51 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:50.307 rmmod nvme_rdma 00:33:50.307 rmmod nvme_fabrics 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 968261 ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # 
killprocess 968261 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 968261 ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 968261 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 968261 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 968261' 00:33:50.307 killing process with pid 968261 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 968261 00:33:50.307 [2024-05-15 02:59:53.356494] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:50.307 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 968261 00:33:50.307 [2024-05-15 02:59:53.464956] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:33:50.567 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.567 02:59:53 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:50.567 00:33:50.567 real 0m25.770s 00:33:50.567 user 1m37.494s 00:33:50.567 sys 0m6.515s 00:33:50.567 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:50.567 02:59:53 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.567 ************************************ 00:33:50.567 END TEST nvmf_fio_host 00:33:50.567 ************************************ 00:33:50.567 02:59:53 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:50.567 02:59:53 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:50.567 02:59:53 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:50.567 02:59:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:50.567 ************************************ 00:33:50.567 START TEST nvmf_failover 00:33:50.567 ************************************ 00:33:50.567 02:59:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:33:50.825 * Looking for test storage... 
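The failover run that starts here goes through the same nvmftestinit bring-up traced below: it loads the IB/RDMA kernel modules, maps the two Mellanox 0x15b3:0x1015 ports to the net devices mlx_0_0 and mlx_0_1, and reads back their 192.168.100.x addresses. A minimal sketch of that lookup, mirroring the helpers sourced from test/nvmf/common.sh and assuming a port named mlx_0_0 that already has an address assigned:

# load the RDMA stack (as load_ib_rdma_modules does in the trace below)
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done

# first IPv4 address on the port; this is what becomes NVMF_FIRST_TARGET_IP
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # prints 192.168.100.8 on this testbed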
00:33:50.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:50.825 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.825 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:50.825 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.825 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:50.826 02:59:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:57.390 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:57.391 
03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:33:57.391 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:33:57.391 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:33:57.391 Found net devices under 0000:18:00.0: mlx_0_0 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:33:57.391 Found net devices under 0000:18:00.1: mlx_0_1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:57.391 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:57.391 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:33:57.391 altname enp24s0f0np0 00:33:57.391 altname ens785f0np0 00:33:57.391 inet 192.168.100.8/24 scope global mlx_0_0 00:33:57.391 valid_lft forever preferred_lft forever 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:57.391 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:57.392 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:57.392 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:33:57.392 altname enp24s0f1np1 00:33:57.392 altname ens785f1np1 00:33:57.392 inet 192.168.100.9/24 scope global mlx_0_1 00:33:57.392 valid_lft forever preferred_lft forever 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:57.392 03:00:00 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:57.392 192.168.100.9' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:57.392 192.168.100.9' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:57.392 192.168.100.9' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=973944 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 973944 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 973944 ']' 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:57.392 [2024-05-15 03:00:00.362647] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:33:57.392 [2024-05-15 03:00:00.362719] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.392 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.392 [2024-05-15 03:00:00.463983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:57.392 [2024-05-15 03:00:00.510233] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.392 [2024-05-15 03:00:00.510290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.392 [2024-05-15 03:00:00.510306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.392 [2024-05-15 03:00:00.510319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.392 [2024-05-15 03:00:00.510330] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
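Note: the target bring-up performed by the lines above can be reproduced by hand with roughly the following shell sketch. Paths, the 0xE core mask, and the transport options are taken from this run; the socket-wait loop is a simplified stand-in for the waitforlisten helper, so treat this as an illustrative approximation rather than the test's exact code.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Launch the NVMe-oF target on cores 1-3 (mask 0xE) with all tracepoint groups
    # enabled, as nvmfappstart does above.
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # Wait for the app to open its RPC socket before issuing any rpc.py calls
    # (simplified replacement for waitforlisten).
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Create the RDMA transport used by the failover test; the options mirror
    # NVMF_TRANSPORT_OPTS as set above.
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
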
00:33:57.392 [2024-05-15 03:00:00.510443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:57.392 [2024-05-15 03:00:00.510544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:57.392 [2024-05-15 03:00:00.510545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.392 03:00:00 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:57.651 [2024-05-15 03:00:00.910476] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15c1560/0x15c5a50) succeed. 00:33:57.651 [2024-05-15 03:00:00.925301] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15c2b00/0x16070e0) succeed. 00:33:57.909 03:00:01 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:58.168 Malloc0 00:33:58.168 03:00:01 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:58.427 03:00:01 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:58.684 03:00:01 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:58.941 [2024-05-15 03:00:02.009600] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:58.941 [2024-05-15 03:00:02.009956] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:58.941 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:59.198 [2024-05-15 03:00:02.262647] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:59.198 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:59.459 [2024-05-15 03:00:02.507571] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=974427 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 974427 /var/tmp/bdevperf.sock 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 974427 ']' 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:59.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:59.459 03:00:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.395 03:00:03 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:00.395 03:00:03 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:34:00.395 03:00:03 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:00.395 NVMe0n1 00:34:00.653 03:00:03 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:00.911 00:34:00.911 03:00:04 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=974612 00:34:00.911 03:00:04 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:00.911 03:00:04 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:01.847 03:00:05 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:02.106 03:00:05 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:05.446 03:00:08 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.446 00:34:05.446 03:00:08 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:05.705 03:00:08 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:08.992 03:00:11 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:08.992 
[2024-05-15 03:00:12.030866] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:08.992 03:00:12 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:09.929 03:00:13 nvmf_rdma.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:10.188 03:00:13 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 974612 00:34:16.763 0 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 974427 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 974427 ']' 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 974427 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 974427 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:16.763 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:34:16.764 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 974427' 00:34:16.764 killing process with pid 974427 00:34:16.764 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # kill 974427 00:34:16.764 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@971 -- # wait 974427 00:34:16.764 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:16.764 [2024-05-15 03:00:02.580135] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:34:16.764 [2024-05-15 03:00:02.580205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974427 ] 00:34:16.764 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.764 [2024-05-15 03:00:02.674844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.764 [2024-05-15 03:00:02.722066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.764 Running I/O for 15 seconds... 
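Note: between the bdevperf start above and the aborted-I/O messages that follow, the test forces failover by shuffling the subsystem's listeners while bdevperf keeps its 128-deep verify workload running for 15 seconds. A condensed sketch of that sequence, with the commands copied from this log (the RPC and NQN shorthand variables are introduced here for readability and are not part of the test script):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Attach the controller to the first two listeners through bdevperf's RPC socket.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $NQN

    # Drive failover: drop the active listener, wait, attach/add alternates, repeat.
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4420
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a 192.168.100.8 -s 4422

Each remove_listener call tears down the qpairs on that port, which is why the nvme_qpair messages below report in-flight WRITE commands completing with ABORTED - SQ DELETION while bdev_nvme fails the path over from 192.168.100.8:4420 to :4421 and onward.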
00:34:16.764 [2024-05-15 03:00:06.256564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.764 [2024-05-15 03:00:06.256610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.256628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.764 [2024-05-15 03:00:06.256643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.256658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.764 [2024-05-15 03:00:06.256671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.256687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.764 [2024-05-15 03:00:06.256702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:16.764 [2024-05-15 03:00:06.258578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.764 [2024-05-15 03:00:06.258602] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:34:16.764 [2024-05-15 03:00:06.258617] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:34:16.764 [2024-05-15 03:00:06.258640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.258655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.258740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.258804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.258867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.258938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.258990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.259938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.259955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 
[2024-05-15 03:00:06.260186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.764 [2024-05-15 03:00:06.260500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.764 [2024-05-15 03:00:06.260518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 
sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.260949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.260965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.261951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.261966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 
[2024-05-15 03:00:06.262649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.765 [2024-05-15 03:00:06.262883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.765 [2024-05-15 03:00:06.262902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.262949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.262965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111264 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.263948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.263963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.766 [2024-05-15 03:00:06.264458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0 00:34:16.766 [2024-05-15 03:00:06.264506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:16.766 [2024-05-15 03:00:06.264522 - 03:00:06.266598] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (condensed): WRITE sqid:1 nsid:1 lba:111424-111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:110592-110664 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x187000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:48 cdw0:87d15790 sqhd:0030 p:0 m:0 dnr:0
00:34:16.767 [2024-05-15 03:00:06.287537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:16.767 [2024-05-15 03:00:06.287563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:16.767 [2024-05-15 03:00:06.287577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110672 len:8 PRP1 0x0 PRP2 0x0
00:34:16.767 [2024-05-15 03:00:06.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.767 [2024-05-15 03:00:06.287682] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:34:16.767 [2024-05-15 03:00:06.287698] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:16.767 [2024-05-15 03:00:06.287737] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:16.767 [2024-05-15 03:00:06.291891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.767 [2024-05-15 03:00:06.352353] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:16.767 [2024-05-15 03:00:09.782232 - 03:00:09.786342] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (condensed): READ sqid:1 nsid:1 lba:101864-102504 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x187000 and WRITE sqid:1 nsid:1 lba:102512-102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0
00:34:16.771 [2024-05-15 03:00:09.788239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:16.771 [2024-05-15 03:00:09.788258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:16.771 [2024-05-15 03:00:09.788272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102880 len:8 PRP1 0x0 PRP2 0x0
00:34:16.771 [2024-05-15 03:00:09.788286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:16.771 [2024-05-15 03:00:09.788337] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:34:16.771 [2024-05-15 03:00:09.788354] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:34:16.771 [2024-05-15 03:00:09.788369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.771 [2024-05-15 03:00:09.792547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.771 [2024-05-15 03:00:09.813214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:34:16.771 [2024-05-15 03:00:09.874446] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:16.771 [2024-05-15 03:00:14.302025 - 03:00:14.302890] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 (condensed): WRITE sqid:1 nsid:1 lba:123328-123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:122784-122896 len:8 SGL KEYED DATA BLOCK len:0x1000 key:0x187000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.771 
[2024-05-15 03:00:14.302912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:34:16.771 [2024-05-15 03:00:14.302926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.771 [2024-05-15 03:00:14.302944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.302959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.302974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.302988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:122960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303470] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x187000 00:34:16.772 [2024-05-15 03:00:14.303653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 
03:00:14.303757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.772 [2024-05-15 03:00:14.303910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.772 [2024-05-15 03:00:14.303924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.303940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.303954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.303970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.303983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 
03:00:14.304345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123728 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.773 [2024-05-15 03:00:14.304674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 
03:00:14.304924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.304971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.304984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.305000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.305015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.305031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.305045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.305061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:34:16.773 [2024-05-15 03:00:14.305077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.773 [2024-05-15 03:00:14.305093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 
cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.774 [2024-05-15 03:00:14.305655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123272 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007544000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.305930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:34:16.774 [2024-05-15 03:00:14.305944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:5980 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.307926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:16.774 [2024-05-15 03:00:14.307946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:16.774 [2024-05-15 03:00:14.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123320 len:8 PRP1 0x0 PRP2 0x0 00:34:16.774 [2024-05-15 03:00:14.307973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.774 [2024-05-15 03:00:14.308025] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:34:16.774 [2024-05-15 03:00:14.308042] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:34:16.774 [2024-05-15 03:00:14.308056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
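This second flood ends the same way: the queued I/O is aborted, qpair 0x2000192e48c0 is freed, and the trid fails over from 192.168.100.8:4422 back to 192.168.100.8:4420, the third reset of this run. The trace that follows (host/failover.sh@65 and @67) then checks that exactly three 'Resetting controller successful' notices were produced before moving on. A sketch of that check as it plausibly reads in the script, reconstructed from those two xtrace lines; the "$out" variable is an assumption standing in for the captured bdevperf output:

    # Reconstructed from the xtrace at host/failover.sh@65-@67; "$out" is assumed
    # to hold the path of the captured bdevperf output shown above.
    count=$(grep -c 'Resetting controller successful' "$out")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi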
00:34:16.774 [2024-05-15 03:00:14.312222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.774 [2024-05-15 03:00:14.332326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:34:16.774 [2024-05-15 03:00:14.389061] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:16.774
00:34:16.774                                                                                                 Latency(us)
00:34:16.774 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:16.774 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:16.774 Verification LBA range: start 0x0 length 0x4000
00:34:16.774 NVMe0n1                     :      15.01   10305.04      40.25     268.40       0.00   12069.07     407.82 1043105.17
00:34:16.774 ===================================================================================================================
00:34:16.774 Total                       :              10305.04      40.25     268.40       0.00   12069.07     407.82 1043105.17
00:34:16.774 Received shutdown signal, test time was about 15.000000 seconds
00:34:16.774
00:34:16.774                                                                                                 Latency(us)
00:34:16.774 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:16.774 ===================================================================================================================
00:34:16.774 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:34:16.774 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:16.774 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:16.774 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:16.774 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=976975
00:34:16.774 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 976975 /var/tmp/bdevperf.sock
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 976975 ']'
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:16.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
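With three resets confirmed, the trace above launches a second bdevperf instance in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock, and the traces that follow wire it up over that socket: two more listeners are added to the subsystem and the controller is attached once per port before the test is kicked off. A condensed sketch of that RPC sequence, using the commands exactly as they appear in the trace below; the rpc, sock and nqn shorthands and the for loop are introduced here for brevity and are not part of the script:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose the subsystem on the two additional RDMA ports used for failover.
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4421
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4422

    # Attach the same remote controller to bdevperf once per path.
    for port in 4420 4421 4422; do
        "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma \
            -a 192.168.100.8 -s "$port" -f ipv4 -n "$nqn"
    done

    # The NVMe0 controller should now be visible through the bdevperf socket.
    "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0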
00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:16.775 03:00:19 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:17.341 03:00:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:17.341 03:00:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:34:17.341 03:00:20 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:17.341 [2024-05-15 03:00:20.626627] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:17.600 03:00:20 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:34:17.600 [2024-05-15 03:00:20.883606] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:34:17.859 03:00:20 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.118 NVMe0n1 00:34:18.118 03:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.377 00:34:18.377 03:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.636 00:34:18.636 03:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:18.636 03:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:18.636 03:00:21 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:18.895 03:00:22 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:22.185 03:00:25 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:22.185 03:00:25 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:22.185 03:00:25 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:22.185 03:00:25 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=977716 00:34:22.185 03:00:25 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 977716 00:34:23.121 0 00:34:23.381 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:23.381 [2024-05-15 03:00:19.513553] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:34:23.381 [2024-05-15 03:00:19.513636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976975 ]
00:34:23.381 EAL: No free 2048 kB hugepages reported on node 1
00:34:23.381 [2024-05-15 03:00:19.624220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:23.381 [2024-05-15 03:00:19.669575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:23.381 [2024-05-15 03:00:22.079076] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:34:23.381 [2024-05-15 03:00:22.079699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:23.381 [2024-05-15 03:00:22.079740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:23.381 [2024-05-15 03:00:22.109711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:34:23.381 [2024-05-15 03:00:22.134099] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:23.381 Running I/O for 1 seconds...
00:34:23.381
00:34:23.381                                                                                                 Latency(us)
00:34:23.381 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:23.381 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:23.381 Verification LBA range: start 0x0 length 0x4000
00:34:23.381 NVMe0n1                     :       1.01   14473.99      56.54       0.00       0.00    8781.40    1980.33   14816.83
00:34:23.381 ===================================================================================================================
00:34:23.381 Total                       :              14473.99      56.54       0.00       0.00    8781.40    1980.33   14816.83
00:34:23.382 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:23.382 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:23.382 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:23.950 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:23.950 03:00:26 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:23.950 03:00:27 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:24.210 03:00:27 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 976975
00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 976975 ']'
00:34:27.502 03:00:30
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 976975 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 976975 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 976975' 00:34:27.502 killing process with pid 976975 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # kill 976975 00:34:27.502 03:00:30 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@971 -- # wait 976975 00:34:27.760 03:00:30 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:27.761 03:00:30 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:28.020 rmmod nvme_rdma 00:34:28.020 rmmod nvme_fabrics 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 973944 ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 973944 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 973944 ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 973944 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 973944 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 973944' 
00:34:28.020 killing process with pid 973944 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # kill 973944 00:34:28.020 [2024-05-15 03:00:31.204102] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:28.020 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@971 -- # wait 973944 00:34:28.020 [2024-05-15 03:00:31.291347] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:34:28.278 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:28.278 03:00:31 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:28.278 00:34:28.278 real 0m37.728s 00:34:28.278 user 2m7.608s 00:34:28.278 sys 0m7.603s 00:34:28.278 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:28.278 03:00:31 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:28.278 ************************************ 00:34:28.278 END TEST nvmf_failover 00:34:28.278 ************************************ 00:34:28.278 03:00:31 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:28.278 03:00:31 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:28.278 03:00:31 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:28.536 03:00:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:28.536 ************************************ 00:34:28.536 START TEST nvmf_host_discovery 00:34:28.536 ************************************ 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:34:28.536 * Looking for test storage... 
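Editor's note: the nvmf_failover teardown just traced (nvmftestfini) deletes the test subsystem over RPC, unloads the nvme-rdma and nvme-fabrics modules, and stops the long-running nvmf_tgt (pid 973944 in this run). A hedged sketch of that sequence using only the commands visible in the trace; the pid variable and ordering are illustrative, not the real common.sh:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nvmfpid=973944   # taken from this run; normally captured when nvmf_tgt is launched

sync
# Remove the test subsystem from the running target.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Unload the host-side fabric modules; -v echoes the rmmod calls seen in the log.
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics

# Finally stop the nvmf_tgt application itself.
kill "$nvmfpid" 2>/dev/null || true
wait "$nvmfpid" 2>/dev/null || true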
00:34:28.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.536 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:34:28.537 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
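Editor's note: discovery.sh never runs its body on this transport; the expanded test '[' rdma == rdma ']' above is the transport guard, followed by the quoted skip message and an exit 0. A minimal sketch of such a guard, assuming the transport arrives in a TEST_TRANSPORT-style variable (the variable name is an assumption; the message is quoted from the log):

#!/usr/bin/env bash
TEST_TRANSPORT=${1:-rdma}   # e.g. passed in via --transport=rdma by the runner

if [ "$TEST_TRANSPORT" = rdma ]; then
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0
fi

# ... TCP-only discovery tests would follow here ...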
00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:34:28.537 00:34:28.537 real 0m0.140s 00:34:28.537 user 0m0.062s 00:34:28.537 sys 0m0.088s 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:28.537 03:00:31 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.537 ************************************ 00:34:28.537 END TEST nvmf_host_discovery 00:34:28.537 ************************************ 00:34:28.537 03:00:31 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:28.537 03:00:31 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:28.537 03:00:31 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:28.537 03:00:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:28.796 ************************************ 00:34:28.796 START TEST nvmf_host_multipath_status 00:34:28.796 ************************************ 00:34:28.796 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:34:28.796 * Looking for test storage... 00:34:28.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:28.796 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.796 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.797 03:00:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:28.797 03:00:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:35.403 
03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:34:35.403 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:34:35.403 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:34:35.403 Found net devices under 0000:18:00.0: mlx_0_0 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:34:35.403 Found net devices under 0000:18:00.1: mlx_0_1 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:35.403 03:00:38 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.403 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:35.404 03:00:38 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:35.404 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:35.404 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:34:35.404 altname enp24s0f0np0 00:34:35.404 altname ens785f0np0 00:34:35.404 inet 192.168.100.8/24 scope global mlx_0_0 00:34:35.404 valid_lft forever preferred_lft forever 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:35.404 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:35.404 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:34:35.404 altname enp24s0f1np1 00:34:35.404 altname ens785f1np1 00:34:35.404 inet 192.168.100.9/24 scope global mlx_0_1 00:34:35.404 valid_lft forever preferred_lft forever 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status 
-- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:35.404 192.168.100.9' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:35.404 192.168.100.9' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:35.404 192.168.100.9' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=981392 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 981392 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 981392 ']' 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:35.404 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.404 [2024-05-15 03:00:38.463602] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:34:35.404 [2024-05-15 03:00:38.463678] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.404 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.404 [2024-05-15 03:00:38.574376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.404 [2024-05-15 03:00:38.626378] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
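Editor's note: before the target is configured, the harness enumerates the mlx_0_* netdevs and derives the two test addresses (192.168.100.8 and 192.168.100.9) with the ip/awk/cut pipeline shown above, then splits the list with head/tail. A condensed sketch of that derivation; the interface list is hard-coded here for illustration, while the real helper walks the detected RDMA netdevs:

#!/usr/bin/env bash
# Sketch of the IP-discovery pipeline seen in the trace above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ifs=(mlx_0_0 mlx_0_1)          # assumed here; discovered dynamically in the harness
ip_list=$(for nic in "${rdma_ifs[@]}"; do get_ip_address "$nic"; done)

NVMF_FIRST_TARGET_IP=$(echo "$ip_list" | head -n 1)                  # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$ip_list" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"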
00:34:35.404 [2024-05-15 03:00:38.626430] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.404 [2024-05-15 03:00:38.626445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.404 [2024-05-15 03:00:38.626459] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.404 [2024-05-15 03:00:38.626470] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.404 [2024-05-15 03:00:38.626575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.404 [2024-05-15 03:00:38.626580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=981392 00:34:35.664 03:00:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:35.664 [2024-05-15 03:00:38.951580] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dcc7a0/0x1dd0c90) succeed. 00:34:35.923 [2024-05-15 03:00:38.965058] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dcdca0/0x1e12320) succeed. 
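Editor's note: with nvmf_tgt listening on /var/tmp/spdk.sock, multipath_status.sh creates the RDMA transport; the two create_ib_device notices above are the target binding both mlx5 ports. The single RPC behind that step, as it appears in the trace (path shortened into a variable for readability):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with the options used in this run:
# 1024 shared receive buffers and an 8192-byte IO unit size (-u).
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192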
00:34:35.923 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:36.182 Malloc0 00:34:36.182 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:36.182 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:36.441 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:36.700 [2024-05-15 03:00:39.767789] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:36.700 [2024-05-15 03:00:39.768168] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:34:36.700 [2024-05-15 03:00:39.936420] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=981599 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 981599 /var/tmp/bdevperf.sock 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 981599 ']' 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:36.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
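Editor's note: the target side of the multipath test is assembled entirely over RPC: a 64 MiB malloc bdev, one subsystem with ANA reporting, and two RDMA listeners on the same IP but different ports, after which bdevperf is started with its own RPC socket. A consolidated sketch of those steps as they appear in the trace (paths shortened; backgrounding bdevperf with & is assumed):

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
ip=192.168.100.8
nqn=nqn.2016-06.io.spdk:cnode1

# Backing device: 64 MiB malloc bdev with 512-byte blocks.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem with ANA reporting (-r) so listener states can be flipped later.
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns $nqn Malloc0

# Two listeners on the same address, different ports: two paths to one namespace.
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a $ip -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a $ip -s 4421

# Host side: bdevperf with its own RPC socket, waiting (-z) for bdevs to be attached.
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 90 &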
00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:36.700 03:00:39 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:36.959 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:36.959 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:34:36.959 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:37.218 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:37.476 Nvme0n1 00:34:37.476 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:37.735 Nvme0n1 00:34:37.735 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:37.735 03:00:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:40.293 03:00:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:40.293 03:00:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:40.293 03:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:40.293 03:00:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:41.232 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:41.233 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:41.233 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.233 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:41.492 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.492 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:41.492 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:41.492 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.750 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.750 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:41.750 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.750 03:00:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:42.009 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.009 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:42.009 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.009 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:42.269 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.269 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:42.269 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.269 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:42.528 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.529 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:42.529 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.529 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:42.795 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.795 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:42.795 03:00:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:43.079 03:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:43.338 03:00:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:44.274 03:00:47 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:44.274 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:44.274 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.274 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.534 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.534 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:44.534 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.534 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.794 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.794 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.794 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.794 03:00:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:45.053 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.053 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:45.053 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:45.053 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.312 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.312 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:45.312 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.312 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:45.572 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.572 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:45.572 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.572 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.830 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.830 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:45.830 03:00:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:46.110 03:00:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:46.111 03:00:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.495 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.755 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:47.755 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.755 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.755 03:00:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.018 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.018 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.018 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.018 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- 
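Editor's note: the two paths being steered here were attached earlier in the trace with bdev_nvme_attach_controller, the second call adding -x multipath so it registers an extra path under the same Nvme0 name, and each ANA flip is one RPC per listener. A sketch of both helpers using the exact RPC invocations from the trace; set_ANA_state matches the name in the trace, attach_paths is an assumed wrapper name:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
ip=192.168.100.8

# Attach the same subsystem twice under one controller name; -x multipath on the
# second call adds a path instead of failing. -l/-o are the retry/timeout options
# used in this run.
attach_paths() {
    $rpc -s $bperf_sock bdev_nvme_set_options -r -1
    $rpc -s $bperf_sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a $ip -s 4420 \
        -f ipv4 -n $nqn -l -1 -o 10
    $rpc -s $bperf_sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a $ip -s 4421 \
        -f ipv4 -n $nqn -x multipath -l -1 -o 10
}

# Flip the ANA state of the two listeners, e.g. set_ANA_state non_optimized inaccessible.
set_ANA_state() {
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $ip -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $ip -s 4421 -n "$2"
}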
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:48.278 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.278 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:48.278 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.278 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.538 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.538 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:48.538 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.538 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.797 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.797 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:48.797 03:00:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:49.056 03:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:49.319 03:00:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:50.261 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:50.261 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:50.261 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.261 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.519 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.519 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:50.519 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.519 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
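Note: every check_status/port_status step traced above reduces to the same pattern: query bdevperf over its RPC socket with bdev_nvme_get_io_paths and compare one field (current, connected or accessible) of the io_path whose trsvcid is 4420 or 4421 against the expected boolean. A minimal bash sketch of that pattern, reconstructed from the traced commands (the real helpers in host/multipath_status.sh may differ in detail, and $rpc_py standing in for the full spdk/scripts/rpc.py path is an assumption):

    #!/usr/bin/env bash
    rpc_py=./spdk/scripts/rpc.py   # assumed location of rpc.py

    port_status() {
        # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current false
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    check_status() {
        # Six expected values, in the order the trace prints them:
        # 4420 current, 4421 current, 4420 connected, 4421 connected,
        # 4420 accessible, 4421 accessible.
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }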
00:34:50.778 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.778 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.778 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.778 03:00:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:51.038 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.038 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:51.038 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.038 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:51.297 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.297 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:51.297 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.297 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.566 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.566 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:51.566 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.566 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.828 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.828 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:51.828 03:00:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:34:52.087 03:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:52.346 03:00:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:53.303 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false 
false 00:34:53.303 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:53.303 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.303 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.578 03:00:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.836 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.836 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.836 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.836 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:54.095 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.095 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:54.095 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.095 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:54.353 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:34:54.611 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:54.869 03:00:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:55.805 03:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:55.805 03:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:55.805 03:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.805 03:00:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:56.064 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.064 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:56.064 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.064 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:56.323 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.323 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:56.323 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.323 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:56.582 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.582 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:56.582 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.582 03:00:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:56.841 
03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:56.841 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:56.841 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.841 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:57.099 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.099 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:57.099 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.099 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:57.357 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.358 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:57.615 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:57.615 03:01:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:34:57.873 03:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:34:58.134 03:01:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:59.070 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:59.070 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.070 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.070 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:59.329 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.329 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:59.329 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.329 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:59.587 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.587 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:59.587 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.587 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:59.844 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.844 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:59.844 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.844 03:01:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.102 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.102 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.102 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.102 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.371 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.375 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:00.375 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.375 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.639 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.639 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:00.639 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:00.639 03:01:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:35:00.898 03:01:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:02.274 03:01:05 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.274 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:02.532 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.532 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:02.532 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.532 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.791 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.791 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.791 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.791 03:01:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.050 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.050 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:03.050 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.050 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.309 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.309 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.309 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.309 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.568 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.568 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:03.568 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:03.826 03:01:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:35:04.084 03:01:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:05.016 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:05.016 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:05.016 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.016 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:05.276 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.276 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:05.276 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.276 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.537 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.537 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.537 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.537 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.797 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.797 03:01:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.797 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.797 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:06.062 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.062 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:06.062 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.062 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.326 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.326 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:06.326 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.326 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.584 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.584 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:06.584 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:35:06.584 03:01:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:35:06.842 03:01:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.216 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
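Note: the ANA flips driven above all go through nvmf_subsystem_listener_set_ana_state against the two rdma listeners of nqn.2016-06.io.spdk:cnode1 (192.168.100.8, ports 4420 and 4421), with a one-second sleep before the path view is re-checked; at @116 the initiator side is also switched to the active_active multipath policy. A hedged sketch of that sequence, again with $rpc_py as a stand-in for the full rpc.py path and reusing the check_status sketch above:

    set_ANA_state() {
        # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener
        # (optimized | non_optimized | inaccessible), as in the trace.
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }

    # Example mirroring the @116-@121 steps: active/active policy on the host,
    # both listeners optimized, a short settle time, then all six attributes true.
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized
    sleep 1
    check_status true true true true true true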
00:35:08.475 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.475 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.475 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.475 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.735 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.735 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.735 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.735 03:01:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.735 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.735 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.735 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.735 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.994 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.994 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:08.994 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.994 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 981599 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 981599 ']' 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 981599 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:09.253 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 981599 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' 
reactor_2 = sudo ']' 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 981599' 00:35:09.530 killing process with pid 981599 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 981599 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 981599 00:35:09.530 Connection closed with partial response: 00:35:09.530 00:35:09.530 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 981599 00:35:09.530 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:09.530 [2024-05-15 03:00:39.986440] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:09.530 [2024-05-15 03:00:39.986509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid981599 ] 00:35:09.530 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.530 [2024-05-15 03:00:40.058372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.530 [2024-05-15 03:00:40.100137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.530 Running I/O for 90 seconds... 00:35:09.530 [2024-05-15 03:00:55.114358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.530 [2024-05-15 03:00:55.114409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98864 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 
len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.530 [2024-05-15 03:00:55.114865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:35:09.530 [2024-05-15 03:00:55.114875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.114888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.114902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.114917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.114927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.114939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.114950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.114963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187000 00:35:09.531 
[2024-05-15 03:00:55.114972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.114984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.114994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.531 [2024-05-15 03:00:55.115374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 
cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:35:09.531 [2024-05-15 03:00:55.115876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187000 00:35:09.531 [2024-05-15 03:00:55.115940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.115954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.531 [2024-05-15 03:00:55.115963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.116085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.531 [2024-05-15 03:00:55.116099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.531 [2024-05-15 03:00:55.116116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.531 [2024-05-15 03:00:55.116125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.532 [2024-05-15 03:00:55.116892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.116924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.116948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:35:09.532 [2024-05-15 03:00:55.116964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.116973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.116996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 
03:00:55.117196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:09.532 [2024-05-15 03:00:55.117348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187000 00:35:09.532 [2024-05-15 03:00:55.117357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:99552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:00:55.117671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:00:55.117680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 
[2024-05-15 03:01:10.073451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.073550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.073607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.073618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.074142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.533 
[2024-05-15 03:01:10.074178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.074211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.074232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.074256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.533 [2024-05-15 03:01:10.074363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50264 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007568000 len:0x1000 key:0x187000 00:35:09.533 [2024-05-15 03:01:10.074384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:09.533 [2024-05-15 03:01:10.074396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074579] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:50600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.074815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.074977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.074991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:35:09.534 [2024-05-15 03:01:10.075251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.534 [2024-05-15 03:01:10.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:09.534 [2024-05-15 03:01:10.075373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.075383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.075405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.075428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.075451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.075473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.075496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.075804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.075829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.075841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.075851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.077373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.077396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:35:09.535 [2024-05-15 03:01:10.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:50464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 
03:01:10.078851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.078885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.078944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.078954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:50752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.079255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 
03:01:10.079276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.079297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.079345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.079367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187000 00:35:09.535 [2024-05-15 03:01:10.079411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:09.535 [2024-05-15 03:01:10.079423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.535 [2024-05-15 03:01:10.079432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.079866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:66 nsid:1 lba:50264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.079978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.079990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.080000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.080021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.080043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.080064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.080085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 
03:01:10.080108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.080130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.080152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.080174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.080185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.080195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.081938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.081981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.081994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.082003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 
03:01:10.082024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.082046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.082900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.082935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.082958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.536 [2024-05-15 03:01:10.082983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:09.536 [2024-05-15 03:01:10.082996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187000 00:35:09.536 [2024-05-15 03:01:10.083006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:51280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083798] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.083829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.083851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.083969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.083983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.083993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187000 00:35:09.537 
[2024-05-15 03:01:10.084435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:50768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.084606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.084641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.084651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
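The (03/02) tag in each completion notice above is the NVMe status code type / status code pair: SCT 0x3 (Path Related Status) and SC 0x02 (Asymmetric Access Inaccessible). In other words, every queued READ/WRITE on this qpair is being completed with an error because the ANA group behind that path has been made inaccessible, presumably by the ANA state changes this multipath status test drives on the target, so the host side can retry the I/O on its remaining path. A minimal sketch of how that sct/sc pair decodes, assuming the standard NVMe completion status layout (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11) and a hypothetical raw status word that is not taken from this log:

    # hypothetical 16-bit completion status word: sct=0x3, sc=0x02, phase=0
    status=0x0604
    sc=$(( (status >> 1) & 0xff ))   # status code: 0x02 -> ASYMMETRIC ACCESS INACCESSIBLE
    sct=$(( (status >> 9) & 0x7 ))   # status code type: 0x3 -> path related status
    printf 'sct=%02x sc=%02x\n' "$sct" "$sc"   # prints: sct=03 sc=02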
00:35:09.537 [2024-05-15 03:01:10.093641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.093697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.093743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.093768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187000 00:35:09.537 [2024-05-15 03:01:10.093812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:09.537 [2024-05-15 03:01:10.093834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:09.537 [2024-05-15 03:01:10.093846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50760 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187000
00:35:09.537 [2024-05-15 03:01:10.093856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.093878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.093905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.093927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.093948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.093971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.093983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187000
00:35:09.537 [2024-05-15 03:01:10.093994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:35:09.537 [2024-05-15 03:01:10.094008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:09.537 [2024-05-15 03:01:10.094018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:35:09.537 Received shutdown signal, test time was about 31.414718 seconds
00:35:09.537
00:35:09.537 Latency(us)
00:35:09.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:09.537 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:09.537 Verification LBA range: start 0x0 length 0x4000
00:35:09.537 Nvme0n1 : 31.41 11623.89 45.41 0.00 0.00 10990.96 71.23 3019898.88
00:35:09.537 ===================================================================================================================
00:35:09.537 Total : 11623.89 45.41 0.00 0.00 10990.96 71.23 3019898.88
00:35:09.538 03:01:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:09.796 rmmod nvme_rdma 00:35:09.796 rmmod nvme_fabrics 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 981392 ']' 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 981392 00:35:09.796 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 981392 ']' 00:35:09.797 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 981392 00:35:09.797 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 981392 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 981392' 00:35:10.061 killing process with pid 981392 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 981392 00:35:10.061 [2024-05-15 03:01:13.134343] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:10.061 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 981392 00:35:10.061 [2024-05-15 03:01:13.209190] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:35:10.323 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.324 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:10.324 00:35:10.324 real 0m41.612s 00:35:10.324 user 2m2.940s 00:35:10.324 sys 0m9.806s 00:35:10.324 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:10.324 03:01:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:10.324 ************************************ 00:35:10.324 END TEST nvmf_host_multipath_status 00:35:10.324 ************************************ 00:35:10.324 03:01:13 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:10.324 03:01:13 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:35:10.324 03:01:13 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:10.324 03:01:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:10.324 ************************************ 00:35:10.324 START TEST nvmf_discovery_remove_ifc 00:35:10.324 ************************************ 00:35:10.324 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:35:10.583 * Looking for test storage... 00:35:10.583 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.583 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
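The nvmf/common.sh values traced above (NVMF_PORT=4420, NVMF_IP_PREFIX=192.168.100, the NVME_HOSTNQN/NVME_HOSTID pair produced by nvme gen-hostnqn, and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn) are the ingredients the host tests typically hand to nvme connect. A hedged sketch of how they fit together, not necessarily the exact flow of discovery_remove_ifc.sh; the target address is illustrative, built from NVMF_IP_PREFIX and NVMF_IP_LEAST_ADDR as defined in the trace:

    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid portion, matching the NVME_HOSTID seen in the trace
    # connect to the RDMA listener the target side exposes (address shown here is illustrative)
    nvme connect -t rdma -a "${NVMF_IP_PREFIX}.${NVMF_IP_LEAST_ADDR}" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"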
00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 
-- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:35:10.584 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:35:10.584 00:35:10.584 real 0m0.138s 00:35:10.584 user 0m0.056s 00:35:10.584 sys 0m0.093s 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:10.584 03:01:13 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.584 ************************************ 00:35:10.584 END TEST nvmf_discovery_remove_ifc 00:35:10.584 ************************************ 00:35:10.584 03:01:13 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:10.584 03:01:13 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:35:10.584 03:01:13 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:10.584 03:01:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:10.584 ************************************ 00:35:10.584 START TEST nvmf_identify_kernel_target 00:35:10.584 ************************************ 00:35:10.584 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:35:10.584 * Looking for test storage... 
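Before the environment setup trace resumes below, a condensed sketch of what identify_kernel_nvmf.sh ends up doing on this rig: configure_kernel_target exposes a local block device through the in-kernel nvmet/nvmet-rdma target via configfs, and spdk_nvme_identify is then pointed first at the discovery subsystem and then at nqn.2016-06.io.spdk:testnqn. The bare echo redirections in the trace below do not show the configfs attribute names, so the names used here are the stock nvmet ones and should be read as an assumption:

  #!/usr/bin/env bash
  # Hedged sketch of the kernel target setup this test performs.
  # /dev/nvme0n1 is the device the trace below ends up selecting; everything else
  # follows the standard /sys/kernel/config/nvmet layout.
  set -e
  modprobe nvmet nvmet-rdma

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1 > "$subsys/attr_allow_any_host"        # likely target of the first bare 'echo 1'
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  echo 192.168.100.8 > "$port/addr_traddr"      # first RDMA IP gathered by allocate_nic_ips
  echo rdma > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"           # publish the subsystem on the port

  # Host side, as exercised later in this log:
  # nvme discover -t rdma -a 192.168.100.8 -s 4420
  # spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'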
00:35:10.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:10.844 03:01:13 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.496 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:35:17.497 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:35:17.497 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:35:17.497 Found net devices under 0000:18:00.0: mlx_0_0 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:35:17.497 Found net devices under 0000:18:00.1: mlx_0_1 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:17.497 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:17.497 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:35:17.497 altname enp24s0f0np0 00:35:17.497 altname ens785f0np0 00:35:17.497 inet 192.168.100.8/24 scope global mlx_0_0 00:35:17.497 valid_lft forever preferred_lft forever 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:17.497 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:17.498 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:17.498 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:35:17.498 altname enp24s0f1np1 00:35:17.498 altname ens785f1np1 00:35:17.498 inet 192.168.100.9/24 scope global mlx_0_1 00:35:17.498 valid_lft forever preferred_lft forever 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:17.498 03:01:20 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:17.498 192.168.100.9' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:17.498 192.168.100.9' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:17.498 192.168.100.9' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.498 03:01:20 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:17.498 03:01:20 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.039 Waiting for block devices as requested 00:35:20.299 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:35:20.299 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:20.559 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:20.559 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:20.559 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:20.819 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:20.819 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:20.819 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:21.078 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:21.078 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:21.078 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:21.338 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:21.338 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:21.338 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:21.597 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:21.597 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:21.597 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # 
is_block_zoned nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:21.857 No valid GPT data, bailing 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:21.857 03:01:24 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:21.857 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -a 192.168.100.8 -t rdma -s 4420 00:35:22.119 00:35:22.119 Discovery Log Number of Records 2, Generation counter 2 00:35:22.119 =====Discovery Log Entry 0====== 00:35:22.119 trtype: rdma 00:35:22.119 adrfam: ipv4 00:35:22.119 subtype: current discovery subsystem 00:35:22.119 treq: not specified, sq flow control disable supported 00:35:22.119 portid: 1 00:35:22.119 trsvcid: 4420 00:35:22.119 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:22.119 traddr: 192.168.100.8 00:35:22.119 eflags: 
none 00:35:22.119 rdma_prtype: not specified 00:35:22.119 rdma_qptype: connected 00:35:22.119 rdma_cms: rdma-cm 00:35:22.119 rdma_pkey: 0x0000 00:35:22.119 =====Discovery Log Entry 1====== 00:35:22.119 trtype: rdma 00:35:22.119 adrfam: ipv4 00:35:22.119 subtype: nvme subsystem 00:35:22.119 treq: not specified, sq flow control disable supported 00:35:22.119 portid: 1 00:35:22.119 trsvcid: 4420 00:35:22.119 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:22.119 traddr: 192.168.100.8 00:35:22.119 eflags: none 00:35:22.119 rdma_prtype: not specified 00:35:22.119 rdma_qptype: connected 00:35:22.119 rdma_cms: rdma-cm 00:35:22.119 rdma_pkey: 0x0000 00:35:22.119 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:35:22.119 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:22.119 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.119 ===================================================== 00:35:22.119 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:22.119 ===================================================== 00:35:22.119 Controller Capabilities/Features 00:35:22.119 ================================ 00:35:22.119 Vendor ID: 0000 00:35:22.119 Subsystem Vendor ID: 0000 00:35:22.119 Serial Number: ffe7420acb421d5833a5 00:35:22.119 Model Number: Linux 00:35:22.119 Firmware Version: 6.7.0-68 00:35:22.119 Recommended Arb Burst: 0 00:35:22.119 IEEE OUI Identifier: 00 00 00 00:35:22.119 Multi-path I/O 00:35:22.119 May have multiple subsystem ports: No 00:35:22.119 May have multiple controllers: No 00:35:22.119 Associated with SR-IOV VF: No 00:35:22.119 Max Data Transfer Size: Unlimited 00:35:22.119 Max Number of Namespaces: 0 00:35:22.119 Max Number of I/O Queues: 1024 00:35:22.119 NVMe Specification Version (VS): 1.3 00:35:22.119 NVMe Specification Version (Identify): 1.3 00:35:22.119 Maximum Queue Entries: 128 00:35:22.119 Contiguous Queues Required: No 00:35:22.119 Arbitration Mechanisms Supported 00:35:22.119 Weighted Round Robin: Not Supported 00:35:22.119 Vendor Specific: Not Supported 00:35:22.119 Reset Timeout: 7500 ms 00:35:22.119 Doorbell Stride: 4 bytes 00:35:22.119 NVM Subsystem Reset: Not Supported 00:35:22.119 Command Sets Supported 00:35:22.119 NVM Command Set: Supported 00:35:22.119 Boot Partition: Not Supported 00:35:22.119 Memory Page Size Minimum: 4096 bytes 00:35:22.119 Memory Page Size Maximum: 4096 bytes 00:35:22.119 Persistent Memory Region: Not Supported 00:35:22.119 Optional Asynchronous Events Supported 00:35:22.119 Namespace Attribute Notices: Not Supported 00:35:22.119 Firmware Activation Notices: Not Supported 00:35:22.119 ANA Change Notices: Not Supported 00:35:22.119 PLE Aggregate Log Change Notices: Not Supported 00:35:22.119 LBA Status Info Alert Notices: Not Supported 00:35:22.119 EGE Aggregate Log Change Notices: Not Supported 00:35:22.119 Normal NVM Subsystem Shutdown event: Not Supported 00:35:22.119 Zone Descriptor Change Notices: Not Supported 00:35:22.119 Discovery Log Change Notices: Supported 00:35:22.119 Controller Attributes 00:35:22.119 128-bit Host Identifier: Not Supported 00:35:22.119 Non-Operational Permissive Mode: Not Supported 00:35:22.119 NVM Sets: Not Supported 00:35:22.119 Read Recovery Levels: Not Supported 00:35:22.119 Endurance Groups: Not Supported 00:35:22.119 Predictable Latency Mode: Not Supported 00:35:22.119 Traffic Based 
Keep ALive: Not Supported 00:35:22.119 Namespace Granularity: Not Supported 00:35:22.119 SQ Associations: Not Supported 00:35:22.119 UUID List: Not Supported 00:35:22.119 Multi-Domain Subsystem: Not Supported 00:35:22.119 Fixed Capacity Management: Not Supported 00:35:22.119 Variable Capacity Management: Not Supported 00:35:22.119 Delete Endurance Group: Not Supported 00:35:22.119 Delete NVM Set: Not Supported 00:35:22.119 Extended LBA Formats Supported: Not Supported 00:35:22.119 Flexible Data Placement Supported: Not Supported 00:35:22.119 00:35:22.119 Controller Memory Buffer Support 00:35:22.119 ================================ 00:35:22.120 Supported: No 00:35:22.120 00:35:22.120 Persistent Memory Region Support 00:35:22.120 ================================ 00:35:22.120 Supported: No 00:35:22.120 00:35:22.120 Admin Command Set Attributes 00:35:22.120 ============================ 00:35:22.120 Security Send/Receive: Not Supported 00:35:22.120 Format NVM: Not Supported 00:35:22.120 Firmware Activate/Download: Not Supported 00:35:22.120 Namespace Management: Not Supported 00:35:22.120 Device Self-Test: Not Supported 00:35:22.120 Directives: Not Supported 00:35:22.120 NVMe-MI: Not Supported 00:35:22.120 Virtualization Management: Not Supported 00:35:22.120 Doorbell Buffer Config: Not Supported 00:35:22.120 Get LBA Status Capability: Not Supported 00:35:22.120 Command & Feature Lockdown Capability: Not Supported 00:35:22.120 Abort Command Limit: 1 00:35:22.120 Async Event Request Limit: 1 00:35:22.120 Number of Firmware Slots: N/A 00:35:22.120 Firmware Slot 1 Read-Only: N/A 00:35:22.120 Firmware Activation Without Reset: N/A 00:35:22.120 Multiple Update Detection Support: N/A 00:35:22.120 Firmware Update Granularity: No Information Provided 00:35:22.120 Per-Namespace SMART Log: No 00:35:22.120 Asymmetric Namespace Access Log Page: Not Supported 00:35:22.120 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:22.120 Command Effects Log Page: Not Supported 00:35:22.120 Get Log Page Extended Data: Supported 00:35:22.120 Telemetry Log Pages: Not Supported 00:35:22.120 Persistent Event Log Pages: Not Supported 00:35:22.120 Supported Log Pages Log Page: May Support 00:35:22.120 Commands Supported & Effects Log Page: Not Supported 00:35:22.120 Feature Identifiers & Effects Log Page:May Support 00:35:22.120 NVMe-MI Commands & Effects Log Page: May Support 00:35:22.120 Data Area 4 for Telemetry Log: Not Supported 00:35:22.120 Error Log Page Entries Supported: 1 00:35:22.120 Keep Alive: Not Supported 00:35:22.120 00:35:22.120 NVM Command Set Attributes 00:35:22.120 ========================== 00:35:22.120 Submission Queue Entry Size 00:35:22.120 Max: 1 00:35:22.120 Min: 1 00:35:22.120 Completion Queue Entry Size 00:35:22.120 Max: 1 00:35:22.120 Min: 1 00:35:22.120 Number of Namespaces: 0 00:35:22.120 Compare Command: Not Supported 00:35:22.120 Write Uncorrectable Command: Not Supported 00:35:22.120 Dataset Management Command: Not Supported 00:35:22.120 Write Zeroes Command: Not Supported 00:35:22.120 Set Features Save Field: Not Supported 00:35:22.120 Reservations: Not Supported 00:35:22.120 Timestamp: Not Supported 00:35:22.120 Copy: Not Supported 00:35:22.120 Volatile Write Cache: Not Present 00:35:22.120 Atomic Write Unit (Normal): 1 00:35:22.120 Atomic Write Unit (PFail): 1 00:35:22.120 Atomic Compare & Write Unit: 1 00:35:22.120 Fused Compare & Write: Not Supported 00:35:22.120 Scatter-Gather List 00:35:22.120 SGL Command Set: Supported 00:35:22.120 SGL Keyed: Supported 00:35:22.120 SGL Bit 
Bucket Descriptor: Not Supported 00:35:22.120 SGL Metadata Pointer: Not Supported 00:35:22.120 Oversized SGL: Not Supported 00:35:22.120 SGL Metadata Address: Not Supported 00:35:22.120 SGL Offset: Supported 00:35:22.120 Transport SGL Data Block: Not Supported 00:35:22.120 Replay Protected Memory Block: Not Supported 00:35:22.120 00:35:22.120 Firmware Slot Information 00:35:22.120 ========================= 00:35:22.120 Active slot: 0 00:35:22.120 00:35:22.120 00:35:22.120 Error Log 00:35:22.120 ========= 00:35:22.120 00:35:22.120 Active Namespaces 00:35:22.120 ================= 00:35:22.120 Discovery Log Page 00:35:22.120 ================== 00:35:22.120 Generation Counter: 2 00:35:22.120 Number of Records: 2 00:35:22.120 Record Format: 0 00:35:22.120 00:35:22.120 Discovery Log Entry 0 00:35:22.120 ---------------------- 00:35:22.120 Transport Type: 1 (RDMA) 00:35:22.120 Address Family: 1 (IPv4) 00:35:22.120 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:22.120 Entry Flags: 00:35:22.120 Duplicate Returned Information: 0 00:35:22.120 Explicit Persistent Connection Support for Discovery: 0 00:35:22.120 Transport Requirements: 00:35:22.120 Secure Channel: Not Specified 00:35:22.120 Port ID: 1 (0x0001) 00:35:22.120 Controller ID: 65535 (0xffff) 00:35:22.120 Admin Max SQ Size: 32 00:35:22.120 Transport Service Identifier: 4420 00:35:22.120 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:22.120 Transport Address: 192.168.100.8 00:35:22.120 Transport Specific Address Subtype - RDMA 00:35:22.120 RDMA QP Service Type: 1 (Reliable Connected) 00:35:22.120 RDMA Provider Type: 1 (No provider specified) 00:35:22.120 RDMA CM Service: 1 (RDMA_CM) 00:35:22.120 Discovery Log Entry 1 00:35:22.120 ---------------------- 00:35:22.120 Transport Type: 1 (RDMA) 00:35:22.120 Address Family: 1 (IPv4) 00:35:22.120 Subsystem Type: 2 (NVM Subsystem) 00:35:22.120 Entry Flags: 00:35:22.120 Duplicate Returned Information: 0 00:35:22.120 Explicit Persistent Connection Support for Discovery: 0 00:35:22.120 Transport Requirements: 00:35:22.120 Secure Channel: Not Specified 00:35:22.120 Port ID: 1 (0x0001) 00:35:22.120 Controller ID: 65535 (0xffff) 00:35:22.120 Admin Max SQ Size: 32 00:35:22.120 Transport Service Identifier: 4420 00:35:22.120 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:22.120 Transport Address: 192.168.100.8 00:35:22.120 Transport Specific Address Subtype - RDMA 00:35:22.120 RDMA QP Service Type: 1 (Reliable Connected) 00:35:22.120 RDMA Provider Type: 1 (No provider specified) 00:35:22.120 RDMA CM Service: 1 (RDMA_CM) 00:35:22.120 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.120 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.382 get_feature(0x01) failed 00:35:22.382 get_feature(0x02) failed 00:35:22.382 get_feature(0x04) failed 00:35:22.382 ===================================================== 00:35:22.382 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.382 ===================================================== 00:35:22.382 Controller Capabilities/Features 00:35:22.382 ================================ 00:35:22.382 Vendor ID: 0000 00:35:22.382 Subsystem Vendor ID: 0000 00:35:22.382 Serial Number: 5b73183fcc58d3feb501 00:35:22.382 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 
00:35:22.382 Firmware Version: 6.7.0-68 00:35:22.382 Recommended Arb Burst: 6 00:35:22.382 IEEE OUI Identifier: 00 00 00 00:35:22.382 Multi-path I/O 00:35:22.382 May have multiple subsystem ports: Yes 00:35:22.382 May have multiple controllers: Yes 00:35:22.382 Associated with SR-IOV VF: No 00:35:22.382 Max Data Transfer Size: 1048576 00:35:22.382 Max Number of Namespaces: 1024 00:35:22.382 Max Number of I/O Queues: 128 00:35:22.382 NVMe Specification Version (VS): 1.3 00:35:22.382 NVMe Specification Version (Identify): 1.3 00:35:22.382 Maximum Queue Entries: 128 00:35:22.382 Contiguous Queues Required: No 00:35:22.382 Arbitration Mechanisms Supported 00:35:22.382 Weighted Round Robin: Not Supported 00:35:22.382 Vendor Specific: Not Supported 00:35:22.382 Reset Timeout: 7500 ms 00:35:22.382 Doorbell Stride: 4 bytes 00:35:22.382 NVM Subsystem Reset: Not Supported 00:35:22.382 Command Sets Supported 00:35:22.382 NVM Command Set: Supported 00:35:22.382 Boot Partition: Not Supported 00:35:22.382 Memory Page Size Minimum: 4096 bytes 00:35:22.382 Memory Page Size Maximum: 4096 bytes 00:35:22.382 Persistent Memory Region: Not Supported 00:35:22.382 Optional Asynchronous Events Supported 00:35:22.382 Namespace Attribute Notices: Supported 00:35:22.382 Firmware Activation Notices: Not Supported 00:35:22.382 ANA Change Notices: Supported 00:35:22.382 PLE Aggregate Log Change Notices: Not Supported 00:35:22.382 LBA Status Info Alert Notices: Not Supported 00:35:22.382 EGE Aggregate Log Change Notices: Not Supported 00:35:22.382 Normal NVM Subsystem Shutdown event: Not Supported 00:35:22.382 Zone Descriptor Change Notices: Not Supported 00:35:22.382 Discovery Log Change Notices: Not Supported 00:35:22.382 Controller Attributes 00:35:22.382 128-bit Host Identifier: Supported 00:35:22.382 Non-Operational Permissive Mode: Not Supported 00:35:22.382 NVM Sets: Not Supported 00:35:22.382 Read Recovery Levels: Not Supported 00:35:22.382 Endurance Groups: Not Supported 00:35:22.382 Predictable Latency Mode: Not Supported 00:35:22.382 Traffic Based Keep ALive: Supported 00:35:22.382 Namespace Granularity: Not Supported 00:35:22.382 SQ Associations: Not Supported 00:35:22.382 UUID List: Not Supported 00:35:22.382 Multi-Domain Subsystem: Not Supported 00:35:22.382 Fixed Capacity Management: Not Supported 00:35:22.382 Variable Capacity Management: Not Supported 00:35:22.382 Delete Endurance Group: Not Supported 00:35:22.382 Delete NVM Set: Not Supported 00:35:22.382 Extended LBA Formats Supported: Not Supported 00:35:22.382 Flexible Data Placement Supported: Not Supported 00:35:22.382 00:35:22.383 Controller Memory Buffer Support 00:35:22.383 ================================ 00:35:22.383 Supported: No 00:35:22.383 00:35:22.383 Persistent Memory Region Support 00:35:22.383 ================================ 00:35:22.383 Supported: No 00:35:22.383 00:35:22.383 Admin Command Set Attributes 00:35:22.383 ============================ 00:35:22.383 Security Send/Receive: Not Supported 00:35:22.383 Format NVM: Not Supported 00:35:22.383 Firmware Activate/Download: Not Supported 00:35:22.383 Namespace Management: Not Supported 00:35:22.383 Device Self-Test: Not Supported 00:35:22.383 Directives: Not Supported 00:35:22.383 NVMe-MI: Not Supported 00:35:22.383 Virtualization Management: Not Supported 00:35:22.383 Doorbell Buffer Config: Not Supported 00:35:22.383 Get LBA Status Capability: Not Supported 00:35:22.383 Command & Feature Lockdown Capability: Not Supported 00:35:22.383 Abort Command Limit: 4 00:35:22.383 Async Event 
Request Limit: 4 00:35:22.383 Number of Firmware Slots: N/A 00:35:22.383 Firmware Slot 1 Read-Only: N/A 00:35:22.383 Firmware Activation Without Reset: N/A 00:35:22.383 Multiple Update Detection Support: N/A 00:35:22.383 Firmware Update Granularity: No Information Provided 00:35:22.383 Per-Namespace SMART Log: Yes 00:35:22.383 Asymmetric Namespace Access Log Page: Supported 00:35:22.383 ANA Transition Time : 10 sec 00:35:22.383 00:35:22.383 Asymmetric Namespace Access Capabilities 00:35:22.383 ANA Optimized State : Supported 00:35:22.383 ANA Non-Optimized State : Supported 00:35:22.383 ANA Inaccessible State : Supported 00:35:22.383 ANA Persistent Loss State : Supported 00:35:22.383 ANA Change State : Supported 00:35:22.383 ANAGRPID is not changed : No 00:35:22.383 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:22.383 00:35:22.383 ANA Group Identifier Maximum : 128 00:35:22.383 Number of ANA Group Identifiers : 128 00:35:22.383 Max Number of Allowed Namespaces : 1024 00:35:22.383 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:22.383 Command Effects Log Page: Supported 00:35:22.383 Get Log Page Extended Data: Supported 00:35:22.383 Telemetry Log Pages: Not Supported 00:35:22.383 Persistent Event Log Pages: Not Supported 00:35:22.383 Supported Log Pages Log Page: May Support 00:35:22.383 Commands Supported & Effects Log Page: Not Supported 00:35:22.383 Feature Identifiers & Effects Log Page:May Support 00:35:22.383 NVMe-MI Commands & Effects Log Page: May Support 00:35:22.383 Data Area 4 for Telemetry Log: Not Supported 00:35:22.383 Error Log Page Entries Supported: 128 00:35:22.383 Keep Alive: Supported 00:35:22.383 Keep Alive Granularity: 1000 ms 00:35:22.383 00:35:22.383 NVM Command Set Attributes 00:35:22.383 ========================== 00:35:22.383 Submission Queue Entry Size 00:35:22.383 Max: 64 00:35:22.383 Min: 64 00:35:22.383 Completion Queue Entry Size 00:35:22.383 Max: 16 00:35:22.383 Min: 16 00:35:22.383 Number of Namespaces: 1024 00:35:22.383 Compare Command: Not Supported 00:35:22.383 Write Uncorrectable Command: Not Supported 00:35:22.383 Dataset Management Command: Supported 00:35:22.383 Write Zeroes Command: Supported 00:35:22.383 Set Features Save Field: Not Supported 00:35:22.383 Reservations: Not Supported 00:35:22.383 Timestamp: Not Supported 00:35:22.383 Copy: Not Supported 00:35:22.383 Volatile Write Cache: Present 00:35:22.383 Atomic Write Unit (Normal): 1 00:35:22.383 Atomic Write Unit (PFail): 1 00:35:22.383 Atomic Compare & Write Unit: 1 00:35:22.383 Fused Compare & Write: Not Supported 00:35:22.383 Scatter-Gather List 00:35:22.383 SGL Command Set: Supported 00:35:22.383 SGL Keyed: Supported 00:35:22.383 SGL Bit Bucket Descriptor: Not Supported 00:35:22.383 SGL Metadata Pointer: Not Supported 00:35:22.383 Oversized SGL: Not Supported 00:35:22.383 SGL Metadata Address: Not Supported 00:35:22.383 SGL Offset: Supported 00:35:22.383 Transport SGL Data Block: Not Supported 00:35:22.383 Replay Protected Memory Block: Not Supported 00:35:22.383 00:35:22.383 Firmware Slot Information 00:35:22.383 ========================= 00:35:22.383 Active slot: 0 00:35:22.383 00:35:22.383 Asymmetric Namespace Access 00:35:22.383 =========================== 00:35:22.383 Change Count : 0 00:35:22.383 Number of ANA Group Descriptors : 1 00:35:22.383 ANA Group Descriptor : 0 00:35:22.383 ANA Group ID : 1 00:35:22.383 Number of NSID Values : 1 00:35:22.383 Change Count : 0 00:35:22.383 ANA State : 1 00:35:22.383 Namespace Identifier : 1 00:35:22.383 00:35:22.383 Commands Supported 
and Effects 00:35:22.383 ============================== 00:35:22.383 Admin Commands 00:35:22.383 -------------- 00:35:22.383 Get Log Page (02h): Supported 00:35:22.383 Identify (06h): Supported 00:35:22.383 Abort (08h): Supported 00:35:22.383 Set Features (09h): Supported 00:35:22.383 Get Features (0Ah): Supported 00:35:22.383 Asynchronous Event Request (0Ch): Supported 00:35:22.383 Keep Alive (18h): Supported 00:35:22.383 I/O Commands 00:35:22.383 ------------ 00:35:22.383 Flush (00h): Supported 00:35:22.383 Write (01h): Supported LBA-Change 00:35:22.383 Read (02h): Supported 00:35:22.383 Write Zeroes (08h): Supported LBA-Change 00:35:22.383 Dataset Management (09h): Supported 00:35:22.383 00:35:22.383 Error Log 00:35:22.383 ========= 00:35:22.383 Entry: 0 00:35:22.383 Error Count: 0x3 00:35:22.383 Submission Queue Id: 0x0 00:35:22.383 Command Id: 0x5 00:35:22.383 Phase Bit: 0 00:35:22.383 Status Code: 0x2 00:35:22.383 Status Code Type: 0x0 00:35:22.383 Do Not Retry: 1 00:35:22.383 Error Location: 0x28 00:35:22.383 LBA: 0x0 00:35:22.383 Namespace: 0x0 00:35:22.383 Vendor Log Page: 0x0 00:35:22.383 ----------- 00:35:22.383 Entry: 1 00:35:22.383 Error Count: 0x2 00:35:22.384 Submission Queue Id: 0x0 00:35:22.384 Command Id: 0x5 00:35:22.384 Phase Bit: 0 00:35:22.384 Status Code: 0x2 00:35:22.384 Status Code Type: 0x0 00:35:22.384 Do Not Retry: 1 00:35:22.384 Error Location: 0x28 00:35:22.384 LBA: 0x0 00:35:22.384 Namespace: 0x0 00:35:22.384 Vendor Log Page: 0x0 00:35:22.384 ----------- 00:35:22.384 Entry: 2 00:35:22.384 Error Count: 0x1 00:35:22.384 Submission Queue Id: 0x0 00:35:22.384 Command Id: 0x0 00:35:22.384 Phase Bit: 0 00:35:22.384 Status Code: 0x2 00:35:22.384 Status Code Type: 0x0 00:35:22.384 Do Not Retry: 1 00:35:22.384 Error Location: 0x28 00:35:22.384 LBA: 0x0 00:35:22.384 Namespace: 0x0 00:35:22.384 Vendor Log Page: 0x0 00:35:22.384 00:35:22.384 Number of Queues 00:35:22.384 ================ 00:35:22.384 Number of I/O Submission Queues: 128 00:35:22.384 Number of I/O Completion Queues: 128 00:35:22.384 00:35:22.384 ZNS Specific Controller Data 00:35:22.384 ============================ 00:35:22.384 Zone Append Size Limit: 0 00:35:22.384 00:35:22.384 00:35:22.384 Active Namespaces 00:35:22.384 ================= 00:35:22.384 get_feature(0x05) failed 00:35:22.384 Namespace ID:1 00:35:22.384 Command Set Identifier: NVM (00h) 00:35:22.384 Deallocate: Supported 00:35:22.384 Deallocated/Unwritten Error: Not Supported 00:35:22.384 Deallocated Read Value: Unknown 00:35:22.384 Deallocate in Write Zeroes: Not Supported 00:35:22.384 Deallocated Guard Field: 0xFFFF 00:35:22.384 Flush: Supported 00:35:22.384 Reservation: Not Supported 00:35:22.384 Namespace Sharing Capabilities: Multiple Controllers 00:35:22.384 Size (in LBAs): 3750748848 (1788GiB) 00:35:22.384 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:22.384 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:22.384 UUID: c0e8ebe3-f559-4519-9c0e-0e64b5248eb8 00:35:22.384 Thin Provisioning: Not Supported 00:35:22.384 Per-NS Atomic Units: Yes 00:35:22.384 Atomic Write Unit (Normal): 8 00:35:22.384 Atomic Write Unit (PFail): 8 00:35:22.384 Preferred Write Granularity: 8 00:35:22.384 Atomic Compare & Write Unit: 8 00:35:22.384 Atomic Boundary Size (Normal): 0 00:35:22.384 Atomic Boundary Size (PFail): 0 00:35:22.384 Atomic Boundary Offset: 0 00:35:22.384 NGUID/EUI64 Never Reused: No 00:35:22.384 ANA group ID: 1 00:35:22.384 Namespace Write Protected: No 00:35:22.384 Number of LBA Formats: 1 00:35:22.384 Current LBA Format: LBA Format #00 
00:35:22.384 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:22.384 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:22.384 rmmod nvme_rdma 00:35:22.384 rmmod nvme_fabrics 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:35:22.384 03:01:25 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:25.679 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:35:25.679 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 
00:35:25.679 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.679 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.938 00:35:25.938 real 0m15.293s 00:35:25.938 user 0m4.449s 00:35:25.938 sys 0m9.899s 00:35:25.938 03:01:29 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:25.938 03:01:29 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.938 ************************************ 00:35:25.938 END TEST nvmf_identify_kernel_target 00:35:25.938 ************************************ 00:35:25.938 03:01:29 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:25.938 03:01:29 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:35:25.938 03:01:29 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:25.938 03:01:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:25.938 ************************************ 00:35:25.938 START TEST nvmf_auth_host 00:35:25.938 ************************************ 00:35:25.938 03:01:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:35:26.197 * Looking for test storage... 00:35:26.197 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.197 03:01:29 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.197 03:01:29 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.198 
03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.198 03:01:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 
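Note: the digests and dhgroups arrays declared just above define the DH-HMAC-CHAP matrix that auth.sh later walks. A minimal bash sketch of how such a matrix can be iterated is below; "run_auth_case" is a hypothetical placeholder, not a helper from the SPDK tree, and the real test adds per-case key selection on top of this loop.

    #!/usr/bin/env bash
    # Iterate every digest x DH-group combination, as declared in host/auth.sh above.
    digests=("sha256" "sha384" "sha512")
    dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        echo "DH-HMAC-CHAP case: digest=${digest} dhgroup=${dhgroup}"
        # run_auth_case "$digest" "$dhgroup"   # hypothetical per-combination test body
      done
    done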
00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:32.772 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:35:32.773 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:35:32.773 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
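Note: the trace above matches PCI functions against known RDMA-capable vendor/device IDs ("Found 0000:18:00.0 (0x15b3 - 0x1015)"), and the discovery of the second port continues below. A simplified stand-in for that scan, using only the standard sysfs layout and the Mellanox IDs seen in the trace, might look like this; the real gather_supported_nvmf_pci_devs in nvmf/common.sh uses a cached PCI bus map and many more device IDs.

    #!/usr/bin/env bash
    # List net interfaces backed by PCI functions with vendor 0x15b3, device 0x1015
    # (the ConnectX IDs matched in the trace above).
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      if [[ $vendor == 0x15b3 && $device == 0x1015 ]]; then
        for net in "$pci"/net/*; do
          # The net/ directory only exists when a netdev driver is bound.
          [[ -e $net ]] && echo "Found $(basename "$pci") -> $(basename "$net")"
        done
      fi
    done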
00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:35:32.773 Found net devices under 0000:18:00.0: mlx_0_0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:35:32.773 Found net devices under 0000:18:00.1: mlx_0_1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # 
modprobe iw_cm 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:32.773 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:32.773 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:35:32.773 altname enp24s0f0np0 00:35:32.773 altname ens785f0np0 00:35:32.773 inet 192.168.100.8/24 scope global mlx_0_0 00:35:32.773 valid_lft forever preferred_lft forever 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:32.773 03:01:35 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:32.773 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:32.773 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:35:32.773 altname enp24s0f1np1 00:35:32.773 altname ens785f1np1 00:35:32.773 inet 192.168.100.9/24 scope global mlx_0_1 00:35:32.773 valid_lft forever preferred_lft forever 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:32.773 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address 
mlx_0_0 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:32.774 192.168.100.9' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:32.774 192.168.100.9' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:32.774 192.168.100.9' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=994398 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 994398 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 994398 ']' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:32.774 03:01:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.038 03:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:33.038 03:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:35:33.038 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:33.038 03:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:33.038 03:01:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9cfe0c4a03e7215ad8845950300b1d8d 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Bbg 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9cfe0c4a03e7215ad8845950300b1d8d 0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9cfe0c4a03e7215ad8845950300b1d8d 0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9cfe0c4a03e7215ad8845950300b1d8d 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Bbg 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Bbg 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Bbg 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@726 -- # digest=sha512 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c7f6c96214e7320ac7d5249b86f3f9d1e9b247f8b4dcaf18e6dfd7d1919c3395 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4Mn 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c7f6c96214e7320ac7d5249b86f3f9d1e9b247f8b4dcaf18e6dfd7d1919c3395 3 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c7f6c96214e7320ac7d5249b86f3f9d1e9b247f8b4dcaf18e6dfd7d1919c3395 3 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c7f6c96214e7320ac7d5249b86f3f9d1e9b247f8b4dcaf18e6dfd7d1919c3395 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4Mn 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4Mn 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4Mn 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93c8e62b3083067368eee35e89a9707d86ffbd609aa5b96d 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.k5j 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93c8e62b3083067368eee35e89a9707d86ffbd609aa5b96d 0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93c8e62b3083067368eee35e89a9707d86ffbd609aa5b96d 0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93c8e62b3083067368eee35e89a9707d86ffbd609aa5b96d 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.k5j 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.k5j 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.k5j 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9ba35ff8ea66fb017ec59c977938a4e519d5dcf9366529c 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.k7U 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9ba35ff8ea66fb017ec59c977938a4e519d5dcf9366529c 2 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9ba35ff8ea66fb017ec59c977938a4e519d5dcf9366529c 2 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f9ba35ff8ea66fb017ec59c977938a4e519d5dcf9366529c 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:33.300 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.k7U 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.k7U 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.k7U 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a4ca910dac39668a41fdf387f51a10d6 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IVL 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a4ca910dac39668a41fdf387f51a10d6 1 00:35:33.559 03:01:36 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a4ca910dac39668a41fdf387f51a10d6 1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a4ca910dac39668a41fdf387f51a10d6 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IVL 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IVL 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.IVL 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfe0d612f332c8f7e162ae8e5f1c810d 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.A4t 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfe0d612f332c8f7e162ae8e5f1c810d 1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfe0d612f332c8f7e162ae8e5f1c810d 1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfe0d612f332c8f7e162ae8e5f1c810d 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.A4t 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.A4t 00:35:33.559 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.A4t 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:33.560 
03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8761d64d3cacdee346de436b4f38f379577ac348608645bc 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WuR 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8761d64d3cacdee346de436b4f38f379577ac348608645bc 2 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8761d64d3cacdee346de436b4f38f379577ac348608645bc 2 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8761d64d3cacdee346de436b4f38f379577ac348608645bc 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:33.560 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.818 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WuR 00:35:33.818 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WuR 00:35:33.818 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WuR 00:35:33.818 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:33.818 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a65c195d820506567f877a95155ce19c 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RoB 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a65c195d820506567f877a95155ce19c 0 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a65c195d820506567f877a95155ce19c 0 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a65c195d820506567f877a95155ce19c 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RoB 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RoB 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RoB 
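Note: the gen_dhchap_key calls traced above all follow the same pattern: read len/2 random bytes as hex via xxd, create a digest-named temp file, and store the formatted secret with mode 0600. The sketch below reproduces only the shell part visible in the trace; the DHHC-1 wrapping performed by the python step inside format_dhchap_key is elided, so raw hex is written where the real helper writes the formatted secret.

    #!/usr/bin/env bash
    # Reproduce the visible gen_dhchap_key flow for one key (values as in the trace).
    digest=null    # one of: null, sha256, sha384, sha512
    len=32         # hex characters requested; len/2 bytes are read from /dev/urandom
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    printf '%s\n' "$key" > "$file"   # real helper writes the DHHC-1-formatted secret here
    chmod 0600 "$file"
    echo "generated ${#key}-char ${digest} key in ${file}"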
00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab403b0be4d01c686068d86f0e9624d77a22754d4e701f8f228ef8e825a109f3 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.j6Y 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab403b0be4d01c686068d86f0e9624d77a22754d4e701f8f228ef8e825a109f3 3 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab403b0be4d01c686068d86f0e9624d77a22754d4e701f8f228ef8e825a109f3 3 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab403b0be4d01c686068d86f0e9624d77a22754d4e701f8f228ef8e825a109f3 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.j6Y 00:35:33.819 03:01:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.j6Y 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.j6Y 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 994398 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 994398 ']' 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
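Note: in the lines that follow, each on-disk secret is registered with the running nvmf_tgt through keyring_file_add_key. The rpc_cmd wrapper in the trace forwards its arguments to scripts/rpc.py, so a direct equivalent (under that assumption, with the first two key/ckey files from the trace) would be:

    #!/usr/bin/env bash
    # Register generated DH-HMAC-CHAP secrets with the target via the keyring RPC.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    keys=(/tmp/spdk.key-null.Bbg /tmp/spdk.key-null.k5j)        # keys[0], keys[1]
    ckeys=(/tmp/spdk.key-sha512.4Mn /tmp/spdk.key-sha384.k7U)   # ckeys[0], ckeys[1]
    for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i"  "${keys[$i]}"
      "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done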
00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:33.819 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Bbg 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4Mn ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Mn 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.k5j 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.k7U ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k7U 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.IVL 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.A4t ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.A4t 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WuR 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.101 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RoB ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RoB 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.j6Y 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:34.102 03:01:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:35:37.396 Waiting for block devices as requested 00:35:37.396 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:35:37.396 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:37.396 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:37.396 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:37.396 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:37.655 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:37.655 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:37.655 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:37.913 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:37.913 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:37.913 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:38.170 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:38.170 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:38.170 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:38.429 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:38.429 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:38.429 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.365 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:39.365 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:39.365 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:39.365 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:39.366 No valid GPT data, bailing 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:39.366 
03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:39.366 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e --hostid=00e1c02b-5999-e811-99d6-a4bf01488b4e -a 192.168.100.8 -t rdma -s 4420 00:35:39.625 00:35:39.625 Discovery Log Number of Records 2, Generation counter 2 00:35:39.625 =====Discovery Log Entry 0====== 00:35:39.625 trtype: rdma 00:35:39.625 adrfam: ipv4 00:35:39.625 subtype: current discovery subsystem 00:35:39.625 treq: not specified, sq flow control disable supported 00:35:39.625 portid: 1 00:35:39.625 trsvcid: 4420 00:35:39.625 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:39.625 traddr: 192.168.100.8 00:35:39.625 eflags: none 00:35:39.625 rdma_prtype: not specified 00:35:39.625 rdma_qptype: connected 00:35:39.625 rdma_cms: rdma-cm 00:35:39.625 rdma_pkey: 0x0000 00:35:39.625 =====Discovery Log Entry 1====== 00:35:39.625 trtype: rdma 00:35:39.625 adrfam: ipv4 00:35:39.625 subtype: nvme subsystem 00:35:39.625 treq: not specified, sq flow control disable supported 00:35:39.625 portid: 1 00:35:39.625 trsvcid: 4420 00:35:39.625 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:39.625 traddr: 192.168.100.8 00:35:39.625 eflags: none 00:35:39.625 rdma_prtype: not specified 00:35:39.625 rdma_qptype: connected 00:35:39.625 rdma_cms: rdma-cm 00:35:39.625 rdma_pkey: 0x0000 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.625 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.626 03:01:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.886 nvme0n1 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:39.887 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.888 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:39.889 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.151 nvme0n1 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.151 03:01:43 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.151 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:40.412 03:01:43 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.413 nvme0n1 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.413 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.727 03:01:43 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.727 nvme0n1 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.727 03:01:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:40.996 03:01:44 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:40.996 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.997 nvme0n1 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:40.997 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.256 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.257 nvme0n1 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.257 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:41.517 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.518 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.776 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:41.777 03:01:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 nvme0n1 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
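Every pass in this digest/dhgroup/keyid sweep repeats the cycle that just completed above for key0: push the matching hash, dhgroup and DHHC-1 secrets to the kernel target (the echo lines at host/auth.sh@48 through @51; their configfs targets are not visible in the xtrace), restrict the SPDK initiator to the combination under test, attach, check that the controller shows up as nvme0, and detach. A condensed sketch of one pass is below, reusing only the RPCs that appear in the trace. rpc_cmd in the log is the test harness wrapper, so the scripts/rpc.py path and the explicit -s socket argument are assumptions, and keyid=1 is just an example (key1 and ckey1 were registered with keyring_file_add_key earlier in this section).

#!/usr/bin/env bash
# One connect_authenticate pass as seen in the trace, e.g. sha256 / ffdhe3072 / key 1.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed wrapper target
sock=/var/tmp/spdk.sock
ip=192.168.100.8        # NVMF_FIRST_TARGET_IP picked by get_main_ns_ip
keyid=1

# Limit the initiator to the digest/dhgroup combination under test.
$rpc -s "$sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Attach to the kernel nvmet subsystem, authenticating with the registered keyring entries.
$rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Pass/fail check used by the test, then teardown before the next combination.
[[ $($rpc -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc -s "$sock" bdev_nvme_detach_controller nvme0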
00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.036 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.037 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.304 nvme0n1 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.304 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.569 nvme0n1 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.569 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:42.829 
03:01:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.829 03:01:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.829 nvme0n1 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.829 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.087 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.088 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.346 nvme0n1 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.347 03:01:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:43.915 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:43.915 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:43.915 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:43.915 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:35:43.916 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 nvme0n1 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.174 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.439 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.440 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.702 nvme0n1 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.702 03:01:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.962 nvme0n1 00:35:44.962 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:44.962 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:44.962 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:44.962 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.962 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.220 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.479 nvme0n1 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.479 03:01:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.739 nvme0n1 00:35:45.739 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.999 
03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.999 03:01:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:47.904 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:47.905 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.472 nvme0n1 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.472 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.473 03:01:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.041 nvme0n1 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
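The entries above and below all repeat the same attach/verify/detach cycle, one per digest/dhgroup/keyid combination. The sketch below restates a single cycle as plain shell using only the RPC calls and flags that appear verbatim in this trace; it assumes rpc_cmd forwards to SPDK's scripts/rpc.py (as the test framework helper does), that the target is already listening on 192.168.100.8:4420 over RDMA, and that the keyring names key2/ckey2 were registered earlier in the script. It is a minimal illustration of the cycle, not the test script itself.

    #!/usr/bin/env bash
    # Sketch of one DH-HMAC-CHAP host-auth cycle as exercised by host/auth.sh,
    # under the assumptions stated above.
    set -euo pipefail

    rootdir=${rootdir:-$PWD}                        # assumed to point at the SPDK checkout
    rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }   # stand-in for the framework's rpc_cmd helper

    digest=sha256
    dhgroup=ffdhe6144
    keyid=2

    # Restrict the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and (for keyids that have one) the bidirectional controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication succeeded if the controller shows up, mirroring the nvme0 check in the log.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
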
00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:49.041 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:49.041 
03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:49.042 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:49.042 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:49.042 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.042 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.610 nvme0n1 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.610 03:01:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.177 nvme0n1 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:50.177 03:01:53 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.177 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.178 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:50.746 nvme0n1 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:50.746 03:01:53 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:50.746 03:01:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.742 nvme0n1 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:51.742 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:51.743 03:01:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.337 nvme0n1 00:35:52.337 03:01:55 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.337 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.337 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.337 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.337 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.623 03:01:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.562 nvme0n1 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.562 03:01:56 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:53.562 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:53.563 03:01:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.135 nvme0n1 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.135 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.396 03:01:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.965 nvme0n1 00:35:54.965 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:54.965 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.965 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.965 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:54.965 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:55.225 
03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.225 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.485 nvme0n1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.485 
03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:55.485 
03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.485 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.745 nvme0n1 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.745 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.746 03:01:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.005 nvme0n1 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:56.005 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:56.006 03:01:59 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.006 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.006 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.265 nvme0n1 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:56.265 03:01:59 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.265 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.525 nvme0n1 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.525 03:01:59 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.525 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.785 03:01:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.785 nvme0n1 
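[The trace above repeats one per-key pass for each digest/dhgroup combination (sha256/ffdhe6144, sha256/ffdhe8192, sha384/ffdhe2048, ...). A minimal sketch of that pass, reconstructed from the traced host/auth.sh calls, is shown below; rpc_cmd, nvmet_auth_set_key and get_main_ns_ip are helpers provided by the test environment, and the loop/variable setup around them is assumed rather than taken verbatim from the script.]

  # Sketch of one digest/dhgroup pass, as suggested by the traced calls above.
  # keys[] / ckeys[] are assumed to hold the DHHC-1 secrets seen in the log.
  for keyid in "${!keys[@]}"; do
      # program the host key (and controller key, if any) on the target side
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # restrict the initiator to the digest/dhgroup pair under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # connect with the matching host key, adding the controller key when one exists
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # authentication succeeded if the controller shows up, then tear it down
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  done

[End of sketch; the log resumes below.]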
00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.785 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.053 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 nvme0n1 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.313 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.572 nvme0n1 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.572 03:02:00 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:57.572 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.573 
03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.573 03:02:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.832 nvme0n1 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:57.832 03:02:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:57.832 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.833 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.092 nvme0n1 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.092 03:02:01 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.350 03:02:01 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.350 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.608 nvme0n1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.608 03:02:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.867 nvme0n1 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.867 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.127 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.128 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.389 nvme0n1 00:35:59.389 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.389 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.389 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.389 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.389 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.390 
03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.390 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.649 nvme0n1 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.649 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.914 03:02:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:35:59.914 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.915 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.176 nvme0n1 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
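(Editor's note: the xtrace above has just finished the sha384/ffdhe4096 pass and is about to repeat the same sequence for ffdhe6144 and ffdhe8192. The following is a condensed, hedged sketch of one (digest, dhgroup, keyid) pass, reconstructed only from the commands visible in this log. rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the keys[]/ckeys[] arrays are the test suite's own helpers and data; the internal body of nvmet_auth_set_key, i.e. where the echoed values land on the target, is not part of this excerpt, and the variable names below are illustrative.)

# One pass of the auth.sh loop, as seen in the trace above (sketch, not the canonical script).
digest=sha384
dhgroup=ffdhe4096
keyid=1

# Tear down the controller left over from the previous pass.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

# Target side: program the DH-HMAC-CHAP secret (and controller secret, if this keyid
# has one) -- the helper echoes 'hmac(sha384)', the dhgroup and the DHHC-1 strings.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: restrict the allowed digest/dhgroup, then reconnect with the matching key.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
ip=$(get_main_ns_ip)   # resolves to NVMF_FIRST_TARGET_IP (192.168.100.8) for rdma
ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # omitted when no ctrlr key (e.g. keyid 4)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_arg[@]}"
# A successful mutual (or one-way) authentication surfaces the namespace as nvme0n1,
# which is what the bare "nvme0n1" markers in the log correspond to.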
00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:00.176 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.177 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.753 nvme0n1 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:00.753 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:00.754 03:02:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.330 nvme0n1 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.330 03:02:04 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.330 03:02:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.900 nvme0n1 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:01.900 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.469 nvme0n1 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.469 03:02:05 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.469 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.470 03:02:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 nvme0n1 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.039 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.040 03:02:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.979 nvme0n1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:03.979 03:02:07 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:03.979 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.917 nvme0n1 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.917 03:02:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.917 
03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.917 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.855 nvme0n1 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.855 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:05.856 
03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.856 03:02:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.424 nvme0n1 00:36:06.424 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.424 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.424 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.424 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.424 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.683 03:02:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 nvme0n1 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 
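At this point the trace has finished the sha384 passes and is loading the first sha512/ffdhe2048 key pair. Each nvmet_auth_set_key call seen above pushes the digest, the DH group and the DHHC-1 secrets into the kernel nvmet target for the host NQN used by the attach calls. A minimal sketch of that target-side effect, assuming the standard Linux nvmet configfs layout with per-host dhchap_* attributes (the exact paths and helpers inside host/auth.sh are not visible in this excerpt):
# Target-side provisioning for one digest/dhgroup/key combination (sketch).
# The configfs location and attribute names are assumptions; the echoed
# values are copied from the trace above.
hostnqn=nqn.2024-02.io.spdk:host0                     # host NQN used by the attach calls in this trace
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn      # assumed configfs path
echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"         # digest for this pass
echo ffdhe2048 > "$host_cfg/dhchap_dhgroup"           # DH group for this pass
echo 'DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb:' > "$host_cfg/dhchap_key"
# A controller secret is only written when the test defined a ckey for this keyid:
# echo "$ckey" > "$host_cfg/dhchap_ctrl_key"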
00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 nvme0n1 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.620 03:02:10 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.620 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
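The host side of each iteration mirrors that: bdev_nvme_set_options pins the allowed DHCHAP digest and DH group, and the subsequent bdev_nvme_attach_controller presents the key pair under test, so a successful attach proves that exactly the combination being exercised was negotiated. A sketch of the equivalent standalone commands, assuming rpc_cmd wraps SPDK's scripts/rpc.py and talks to the SPDK application acting as the NVMe-oF host (the RPC socket is not shown in this trace); key1 and ckey1 name keys registered earlier in the run, outside this excerpt:
# Host-side half of one sha512/ffdhe2048 iteration (sketch); flags are copied
# verbatim from the rpc_cmd calls in the trace.
rpc=./scripts/rpc.py                                  # assumed rpc client path
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1        # named keys registered earlier in the run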
00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.878 03:02:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.136 nvme0n1 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
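Before every attach, get_main_ns_ip resolves the address from ip_candidates: rdma transports use NVMF_FIRST_TARGET_IP (192.168.100.8 throughout this run), while tcp would use NVMF_INITIATOR_IP. After the attach, each round closes with a verify-and-teardown step: the controller list must contain nvme0, and the controller is then detached so the next digest/dhgroup/keyid combination starts from a clean state. A sketch of that closing step, under the same rpc.py assumption as above, with the jq filter and names taken from the trace:
# Verify the authenticated controller exists, then tear it down (sketch).
rpc=./scripts/rpc.py                                  # assumed rpc client path
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')   # expect a single controller
[[ $name == nvme0 ]] || exit 1                        # a missing controller means authentication failed
$rpc bdev_nvme_detach_controller nvme0                # clean up before the next combination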
00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:08.136 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.137 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.400 nvme0n1 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.400 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.660 nvme0n1 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.660 03:02:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.919 nvme0n1 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.920 03:02:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.920 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.179 nvme0n1 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.180 03:02:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.180 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.439 nvme0n1 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.439 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.440 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.440 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.440 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.699 03:02:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.699 03:02:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 nvme0n1 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.959 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.219 nvme0n1 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.219 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.478 nvme0n1 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
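
The records around this point repeat the same pattern for every digest, DH group, and key id: host/auth.sh@101-103 loop over the configured dhgroups and key ids, program the kernel nvmet target with the secret for that combination (nvmet_auth_set_key), then run connect_authenticate to attach from the SPDK host with DH-HMAC-CHAP enabled. A minimal sketch of that driver loop follows; it assumes the keys/ckeys arrays and the two helpers come from host/auth.sh and the SPDK test framework, and it lists only the DH groups visible in this part of the log.

# Hedged sketch of the loop seen at host/auth.sh@101-104 (dhgroups/keys/ckeys and
# the helper names are taken from the log; everything else is an assumption).
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this excerpt

for dhgroup in "${dhgroups[@]}"; do
	for keyid in "${!keys[@]}"; do
		# Target side: install the DHHC-1 secret for this host/key id.
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
		# Host side: reconfigure bdev_nvme and attach with authentication.
		connect_authenticate "$digest" "$dhgroup" "$keyid"
	done
done
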
00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.478 03:02:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.046 nvme0n1 00:36:11.046 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.046 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
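
Each connect_authenticate call (host/auth.sh@55-65 in the surrounding records) follows the same steps: restrict the host to one digest and one DH group, resolve the RDMA target address, attach with the DH-HMAC-CHAP key (plus the controller key when one exists), confirm the controller appeared, and detach again. A condensed sketch, assuming rpc_cmd, get_main_ns_ip, and the ckeys array are provided by the SPDK test framework and host/auth.sh:

# Condensed view of the connect/verify/detach cycle visible in the log.
connect_authenticate_sketch() {
	local digest=$1 dhgroup=$2 keyid=$3
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	local ip
	ip=$(get_main_ns_ip)   # resolves to 192.168.100.8 in this run
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
	# Authentication succeeded if the controller shows up under its bdev name.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
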
00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.047 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.306 nvme0n1 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.306 
03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.306 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.565 nvme0n1 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.565 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 
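
The secrets printed throughout the transcript use the DHHC-1:<id>:<base64>: representation of NVMe DH-HMAC-CHAP secrets, where the two-digit field appears to encode the optional hash transformation of the secret (a hedged reading; the log itself only shows the strings). Key id 4 has no controller key, which is why the host/auth.sh@58 line builds the extra --dhchap-ctrlr-key argument conditionally with bash's ${var:+word} expansion. A tiny illustration of that expansion, with placeholder secrets:

# Illustration only: the ckeys values below are placeholders, not the real secrets.
ckeys=([0]="DHHC-1:03:placeholder:" [1]="DHHC-1:02:placeholder:" [4]="")
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # 0 for key id 4; 2 (flag + key name) when a controller key exists
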
00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.824 03:02:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.083 nvme0n1 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:12.083 03:02:15 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.083 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.342 nvme0n1 00:36:12.342 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.342 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.342 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.342 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.342 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:12.601 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:12.602 03:02:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:12.602 03:02:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.602 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.602 03:02:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.169 nvme0n1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.169 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.737 nvme0n1 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.737 03:02:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.306 nvme0n1 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.306 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.874 nvme0n1 00:36:14.874 03:02:17 
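Each pass of the keyid loop above re-keys the kernel soft target before the initiator reconnects: nvmet_auth_set_key echoes the digest as 'hmac(sha512)', the DH group, the DHHC-1 secret and, when present, the controller secret. The xtrace only records the echoed values, not their destinations, so the configfs paths and attribute names in the sketch below are assumptions for illustration only.

    # Hypothetical reconstruction of the target-side helper (paths/attributes assumed):
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha512)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
        echo "$key"            > "$host/dhchap_key"      # DHHC-1:01:... secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }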
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.874 03:02:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.874 03:02:18 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.874 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.443 nvme0n1 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
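Before every attach, get_main_ns_ip resolves which address to dial: the trace shows it mapping the rdma transport to NVMF_FIRST_TARGET_IP and printing 192.168.100.8. Below is a condensed sketch of that selection logic reconstructed from the trace; the name of the variable that selects the transport (TEST_TRANSPORT here) is an assumption, since the trace only shows the expanded value rdma.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # rdma in this run -> NVMF_FIRST_TARGET_IP
        [[ -n ${!ip} ]] && echo "${!ip}"       # expands to 192.168.100.8 here
    }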
echo ffdhe8192 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNmZTBjNGEwM2U3MjE1YWQ4ODQ1OTUwMzAwYjFkOGR8anLb: 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzdmNmM5NjIxNGU3MzIwYWM3ZDUyNDliODZmM2Y5ZDFlOWIyNDdmOGI0ZGNhZjE4ZTZkZmQ3ZDE5MTljMzM5NbYbrqY=: 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.443 03:02:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.473 nvme0n1 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:16.473 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.474 03:02:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.043 nvme0n1 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.043 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTRjYTkxMGRhYzM5NjY4YTQxZmRmMzg3ZjUxYTEwZDYDtpzr: 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: ]] 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlMGQ2MTJmMzMyYzhmN2UxNjJhZThlNWYxYzgxMGRDF+NR: 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.302 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.303 03:02:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.242 nvme0n1 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc2MWQ2NGQzY2FjZGVlMzQ2ZGU0MzZiNGYzOGYzNzk1NzdhYzM0ODYwODY0NWJjryEO+g==: 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTY1YzE5NWQ4MjA1MDY1NjdmODc3YTk1MTU1Y2UxOWM+tA6Y: 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:18.242 03:02:21 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.242 03:02:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.811 nvme0n1 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.811 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWI0MDNiMGJlNGQwMWM2ODYwNjhkODZmMGU5NjI0ZDc3YTIyNzU0ZDRlNzAxZjhmMjI4ZWY4ZTgyNWExMDlmMzmlNUQ=: 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.070 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.006 nvme0n1 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.006 03:02:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
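On the initiator side, every round traced above follows the same connect_authenticate shape: pin the digest and DH group under test, attach with the matching key pair, confirm the controller shows up, then detach. The sketch below is condensed from the rpc_cmd calls visible in the trace (addresses and NQNs as used in this run); the controller key is conditional in the real script, since key 4 has no ckey, so treat this as a simplified sketch.

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }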
digest=sha256 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTNjOGU2MmIzMDgzMDY3MzY4ZWVlMzVlODlhOTcwN2Q4NmZmYmQ2MDlhYTViOTZkwaJ9Yw==: 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: ]] 00:36:20.006 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjliYTM1ZmY4ZWE2NmZiMDE3ZWM1OWM5Nzc5MzhhNGU1MTlkNWRjZjkzNjY1MjljXK8HIg==: 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 
00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.007 request: 00:36:20.007 { 00:36:20.007 "name": "nvme0", 00:36:20.007 "trtype": "rdma", 00:36:20.007 "traddr": "192.168.100.8", 00:36:20.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:20.007 "adrfam": "ipv4", 00:36:20.007 "trsvcid": "4420", 00:36:20.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:20.007 "method": "bdev_nvme_attach_controller", 00:36:20.007 "req_id": 1 00:36:20.007 } 00:36:20.007 Got JSON-RPC error response 00:36:20.007 response: 00:36:20.007 { 00:36:20.007 "code": -32602, 00:36:20.007 "message": "Invalid parameters" 00:36:20.007 } 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 
00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.007 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.267 request: 00:36:20.267 { 00:36:20.267 "name": "nvme0", 00:36:20.267 "trtype": "rdma", 00:36:20.267 "traddr": "192.168.100.8", 00:36:20.267 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:20.267 "adrfam": "ipv4", 00:36:20.267 "trsvcid": "4420", 00:36:20.267 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:20.267 "dhchap_key": "key2", 00:36:20.267 "method": "bdev_nvme_attach_controller", 00:36:20.267 "req_id": 1 00:36:20.267 } 00:36:20.267 Got JSON-RPC error response 00:36:20.267 response: 00:36:20.267 { 00:36:20.267 "code": -32602, 00:36:20.267 "message": "Invalid parameters" 00:36:20.267 } 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.267 request: 00:36:20.267 { 00:36:20.267 "name": "nvme0", 00:36:20.267 "trtype": "rdma", 00:36:20.267 "traddr": "192.168.100.8", 00:36:20.267 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:20.267 "adrfam": "ipv4", 00:36:20.267 "trsvcid": "4420", 00:36:20.267 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:20.267 "dhchap_key": "key1", 00:36:20.267 "dhchap_ctrlr_key": "ckey2", 00:36:20.267 "method": "bdev_nvme_attach_controller", 00:36:20.267 "req_id": 1 00:36:20.267 } 00:36:20.267 Got JSON-RPC error response 00:36:20.267 response: 00:36:20.267 { 00:36:20.267 "code": -32602, 00:36:20.267 "message": "Invalid parameters" 00:36:20.267 } 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host 
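The request/response pairs above are the negative half of the suite: after re-keying the target for sha256/ffdhe2048, attaching with no key, with key2 alone, or with the mismatched key1/ckey2 pair must fail, and each attempt is expected to return JSON-RPC error -32602 (Invalid parameters) and leave bdev_nvme_get_controllers empty. The suite drives this through its NOT helper; the wrapper below is a hypothetical condensation of the same expect-failure pattern using the checks visible in the trace.

    # Hypothetical expect-failure helper: the attach must fail and leave no controllers
    must_fail_attach() {
        if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 "$@"; then
            return 1   # unexpected success
        fi
        (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
    }
    # e.g. must_fail_attach --dhchap-key key1 --dhchap-ctrlr-key ckey2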
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:20.267 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:20.267 rmmod nvme_rdma 00:36:20.526 rmmod nvme_fabrics 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 994398 ']' 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 994398 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 994398 ']' 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 994398 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 994398 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 994398' 00:36:20.526 killing process with pid 994398 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 994398 00:36:20.526 03:02:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 994398 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:20.786 03:02:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:36:20.786 03:02:23 
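Teardown unwinds everything in roughly reverse order of setup: the initiator modules (nvme_rdma, nvme_fabrics) are removed, the nvmf target process (pid 994398 in this run) is killed, and clean_kernel_target deletes the configfs tree before unloading nvmet_rdma/nvmet. The commands below restate that sequence from the trace in one place; the destination of the bare 'echo 0' is not shown in the xtrace, so the path used for it here is an assumption.

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    cnode=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -v -r nvme-rdma nvme-fabrics                  # initiator side
    rm "$cnode/allowed_hosts/nqn.2024-02.io.spdk:host0"    # revoke the host
    rmdir "$host"
    echo 0 > "$cnode/namespaces/1/enable"                  # destination assumed
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$cnode/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$cnode"
    modprobe -r nvmet_rdma nvmet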
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:24.077 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:36:24.077 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:24.077 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:24.336 03:02:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Bbg /tmp/spdk.key-null.k5j /tmp/spdk.key-sha256.IVL /tmp/spdk.key-sha384.WuR /tmp/spdk.key-sha512.j6Y /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:36:24.336 03:02:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:36:27.629 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:27.629 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:36:27.629 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:27.629 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:27.629 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:27.629 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:27.630 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:27.630 00:36:27.630 real 1m1.433s 00:36:27.630 user 0m50.248s 00:36:27.630 sys 0m15.397s 00:36:27.630 03:02:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:27.630 03:02:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.630 ************************************ 00:36:27.630 END TEST nvmf_auth_host 00:36:27.630 ************************************ 00:36:27.630 03:02:30 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:36:27.630 03:02:30 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:36:27.630 03:02:30 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:36:27.630 03:02:30 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:36:27.630 03:02:30 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:27.630 03:02:30 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:27.630 03:02:30 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:27.630 03:02:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:27.630 ************************************ 00:36:27.630 START TEST nvmf_bdevperf 00:36:27.630 ************************************ 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:36:27.630 * Looking for test storage... 00:36:27.630 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:27.630 03:02:30 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:27.630 03:02:30 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.200 
03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:36:34.200 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:34.200 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:36:34.201 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:36:34.201 Found net devices under 0000:18:00.0: mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:36:34.201 Found net devices under 0000:18:00.1: mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:36:34.201 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:34.201 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:36:34.201 altname enp24s0f0np0 00:36:34.201 altname ens785f0np0 00:36:34.201 inet 192.168.100.8/24 scope global mlx_0_0 00:36:34.201 valid_lft forever preferred_lft forever 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:36:34.201 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:34.201 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:36:34.201 altname enp24s0f1np1 00:36:34.201 altname ens785f1np1 00:36:34.201 inet 192.168.100.9/24 scope global mlx_0_1 00:36:34.201 valid_lft forever preferred_lft forever 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:34.201 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:36:34.202 192.168.100.9' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:36:34.202 192.168.100.9' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:36:34.202 192.168.100.9' 00:36:34.202 03:02:37 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1007087 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1007087 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 1007087 ']' 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:34.202 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.202 [2024-05-15 03:02:37.360764] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:34.202 [2024-05-15 03:02:37.360845] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.202 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.202 [2024-05-15 03:02:37.464466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:34.461 [2024-05-15 03:02:37.516892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.461 [2024-05-15 03:02:37.516951] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.461 [2024-05-15 03:02:37.516966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.461 [2024-05-15 03:02:37.516979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:34.461 [2024-05-15 03:02:37.516990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.461 [2024-05-15 03:02:37.517103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.461 [2024-05-15 03:02:37.517205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:34.461 [2024-05-15 03:02:37.517206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.461 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.461 [2024-05-15 03:02:37.697695] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2476560/0x247aa50) succeed. 00:36:34.461 [2024-05-15 03:02:37.712584] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2477b00/0x24bc0e0) succeed. 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.721 Malloc0 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:34.721 [2024-05-15 03:02:37.884203] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:34.721 [2024-05-15 03:02:37.884577] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:34.721 { 00:36:34.721 "params": { 00:36:34.721 "name": "Nvme$subsystem", 00:36:34.721 "trtype": "$TEST_TRANSPORT", 00:36:34.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:34.721 "adrfam": "ipv4", 00:36:34.721 "trsvcid": "$NVMF_PORT", 00:36:34.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:34.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:34.721 "hdgst": ${hdgst:-false}, 00:36:34.721 "ddgst": ${ddgst:-false} 00:36:34.721 }, 00:36:34.721 "method": "bdev_nvme_attach_controller" 00:36:34.721 } 00:36:34.721 EOF 00:36:34.721 )") 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:34.721 03:02:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:34.721 "params": { 00:36:34.721 "name": "Nvme1", 00:36:34.721 "trtype": "rdma", 00:36:34.721 "traddr": "192.168.100.8", 00:36:34.721 "adrfam": "ipv4", 00:36:34.721 "trsvcid": "4420", 00:36:34.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:34.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:34.721 "hdgst": false, 00:36:34.721 "ddgst": false 00:36:34.721 }, 00:36:34.721 "method": "bdev_nvme_attach_controller" 00:36:34.721 }' 00:36:34.721 [2024-05-15 03:02:37.939789] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:34.721 [2024-05-15 03:02:37.939859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007119 ] 00:36:34.721 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.980 [2024-05-15 03:02:38.049837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.980 [2024-05-15 03:02:38.097007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.239 Running I/O for 1 seconds... 
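The nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls traced above go through the harness's rpc_cmd wrapper, which in these tests drives SPDK's scripts/rpc.py client with the same arguments. A minimal standalone sketch of that bring-up, assuming nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock (illustrative only, not the harness's exact code path; the results of the 1-second run started above follow immediately below):

    # Recreate the RDMA target configuration shown in the trace above.
    RPC=./scripts/rpc.py                                     # path inside an SPDK checkout (assumption)
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420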
00:36:36.177 00:36:36.177 Latency(us) 00:36:36.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:36.178 Verification LBA range: start 0x0 length 0x4000 00:36:36.178 Nvme1n1 : 1.01 12449.71 48.63 0.00 0.00 10210.84 4017.64 13620.09 00:36:36.178 =================================================================================================================== 00:36:36.178 Total : 12449.71 48.63 0.00 0.00 10210.84 4017.64 13620.09 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1007316 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:36.437 { 00:36:36.437 "params": { 00:36:36.437 "name": "Nvme$subsystem", 00:36:36.437 "trtype": "$TEST_TRANSPORT", 00:36:36.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.437 "adrfam": "ipv4", 00:36:36.437 "trsvcid": "$NVMF_PORT", 00:36:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.437 "hdgst": ${hdgst:-false}, 00:36:36.437 "ddgst": ${ddgst:-false} 00:36:36.437 }, 00:36:36.437 "method": "bdev_nvme_attach_controller" 00:36:36.437 } 00:36:36.437 EOF 00:36:36.437 )") 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:36.437 03:02:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:36.437 "params": { 00:36:36.437 "name": "Nvme1", 00:36:36.437 "trtype": "rdma", 00:36:36.437 "traddr": "192.168.100.8", 00:36:36.437 "adrfam": "ipv4", 00:36:36.437 "trsvcid": "4420", 00:36:36.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:36.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:36.437 "hdgst": false, 00:36:36.437 "ddgst": false 00:36:36.437 }, 00:36:36.437 "method": "bdev_nvme_attach_controller" 00:36:36.437 }' 00:36:36.437 [2024-05-15 03:02:39.573238] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:36.438 [2024-05-15 03:02:39.573312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007316 ] 00:36:36.438 EAL: No free 2048 kB hugepages reported on node 1 00:36:36.438 [2024-05-15 03:02:39.667736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.438 [2024-05-15 03:02:39.715946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.697 Running I/O for 15 seconds... 
00:36:39.259 03:02:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1007087 00:36:39.259 03:02:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:40.661 [2024-05-15 03:02:43.563667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.563983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.563998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 
03:02:43.564297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182700 00:36:40.661 [2024-05-15 03:02:43.564492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.661 [2024-05-15 03:02:43.564509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:45296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182700 00:36:40.662 [2024-05-15 03:02:43.564832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:36:40.662 [2024-05-15 03:02:43.564848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182700 00:36:40.662 
[2024-05-15 03:02:43.564865 - 03:02:43.567618] nvme_qpair.c: 243/474: *NOTICE*: (repeated per-command notices condensed) READ sqid:1 nsid:1 len:8 for lba 45376 through 46072 in steps of 8 (SGL KEYED DATA BLOCK, len:0x1000, key:0x182700) and WRITE sqid:1 for lba 46080 and 46088 (SGL DATA BLOCK OFFSET 0x0, len:0x1000), each command completed as ABORTED - SQ DELETION (00/08) qid:1 sqhd:f200 p:0 m:0 dnr:0
00:36:40.664 [2024-05-15 03:02:43.569598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:36:40.664 [2024-05-15 03:02:43.569618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:36:40.664 [2024-05-15 03:02:43.569631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46096 len:8 PRP1 0x0 PRP2 0x0
00:36:40.664 [2024-05-15 03:02:43.569646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:40.664 [2024-05-15 03:02:43.569697] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
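The burst condensed above is bdev_nvme draining the I/O qpair after the target process was killed: every queued command is completed with ABORTED - SQ DELETION before a controller reset is attempted. When reading a captured log like this one, two ordinary shell one-liners are enough to summarize such a burst (the log file name here is only a placeholder):
  grep -c 'ABORTED - SQ DELETION' bdevperf.log                                  # how many queued commands were aborted
  grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'    # first and last affected LBA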
00:36:40.664 [2024-05-15 03:02:43.573881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:40.664 [2024-05-15 03:02:43.594844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:40.664 [2024-05-15 03:02:43.598519] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:40.664 [2024-05-15 03:02:43.598547] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:40.664 [2024-05-15 03:02:43.598570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:41.637 [2024-05-15 03:02:44.602549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:41.637 [2024-05-15 03:02:44.602611] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:41.637 [2024-05-15 03:02:44.603234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:41.637 [2024-05-15 03:02:44.603275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:41.637 [2024-05-15 03:02:44.603308] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:36:41.637 [2024-05-15 03:02:44.603684] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:41.637 [2024-05-15 03:02:44.607579] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
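The RDMA_CM_EVENT_REJECTED and "RDMA connect error -74" messages above are expected at this point in the test: the old target was killed, so connect attempts to 192.168.100.8:4420 are rejected until a new listener is up. Outside this scripted scenario, a few standard checks help tell "target not listening yet" apart from a broken RoCE link; this is a hedged sketch, not part of the test itself, and rpc.py (which ships with SPDK) must be run against the target host's RPC socket:
  rdma link show                                                            # iproute2: RDMA devices and port state
  ping -c 1 192.168.100.8                                                   # basic reachability of the RoCE interface
  scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1    # on the target host: is the listener registered?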
00:36:41.637 [2024-05-15 03:02:44.618258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:41.637 [2024-05-15 03:02:44.621776] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:41.637 [2024-05-15 03:02:44.621804] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:41.637 [2024-05-15 03:02:44.621817] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:42.575 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1007087 Killed "${NVMF_APP[@]}" "$@" 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1008189 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1008189 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 1008189 ']' 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.575 [2024-05-15 03:02:45.596462] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:42.575 [2024-05-15 03:02:45.596533] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.575 [2024-05-15 03:02:45.625662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:42.575 [2024-05-15 03:02:45.625699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:36:42.575 [2024-05-15 03:02:45.625971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:42.575 [2024-05-15 03:02:45.625990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:42.575 [2024-05-15 03:02:45.626007] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:36:42.575 [2024-05-15 03:02:45.627247] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:42.575 [2024-05-15 03:02:45.630154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:42.575 [2024-05-15 03:02:45.641812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:42.575 [2024-05-15 03:02:45.644791] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:42.575 [2024-05-15 03:02:45.644818] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:42.575 [2024-05-15 03:02:45.644831] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:36:42.575 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.575 [2024-05-15 03:02:45.700025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:42.575 [2024-05-15 03:02:45.746493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.575 [2024-05-15 03:02:45.746542] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.575 [2024-05-15 03:02:45.746557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.575 [2024-05-15 03:02:45.746570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.575 [2024-05-15 03:02:45.746581] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
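The restart sequence logged above (kill the old nvmf_tgt, start a new instance with core mask 0xE, then waitforlisten polling /var/tmp/spdk.sock) can be reproduced standalone roughly as follows; a sketch assuming it is run from an SPDK build tree laid out like the one in this job:
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  tgt_pid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # same idea as waitforlisten
  build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # optional, per the app_setup_trace notices above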
00:36:42.575 [2024-05-15 03:02:45.746641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.575 [2024-05-15 03:02:45.746741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:42.575 [2024-05-15 03:02:45.746742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:42.575 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 03:02:45 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.835 03:02:45 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:36:42.835 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.835 03:02:45 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 [2024-05-15 03:02:45.918721] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14c1560/0x14c5a50) succeed. 00:36:42.835 [2024-05-15 03:02:45.932910] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14c2b00/0x15070e0) succeed. 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 Malloc0 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:42.835 [2024-05-15 03:02:46.085872] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:42.835 
[2024-05-15 03:02:46.086223] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.835 03:02:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1007316 00:36:43.403 [2024-05-15 03:02:46.648742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:36:43.403 [2024-05-15 03:02:46.648782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:43.403 [2024-05-15 03:02:46.649054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:43.403 [2024-05-15 03:02:46.649074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:43.403 [2024-05-15 03:02:46.649089] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:36:43.403 [2024-05-15 03:02:46.650807] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:36:43.403 [2024-05-15 03:02:46.653237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:43.403 [2024-05-15 03:02:46.665417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:43.661 [2024-05-15 03:02:46.729291] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:51.776 00:36:51.776 Latency(us) 00:36:51.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:51.776 Verification LBA range: start 0x0 length 0x4000 00:36:51.776 Nvme1n1 : 15.01 10393.35 40.60 6720.08 0.00 7448.95 414.94 1043105.17 00:36:51.776 =================================================================================================================== 00:36:51.776 Total : 10393.35 40.60 6720.08 0.00 7448.95 414.94 1043105.17 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:36:52.036 rmmod nvme_rdma 00:36:52.036 rmmod nvme_fabrics 
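For reference, the subsystem torn down above was configured entirely through the RPCs visible in this log (rpc_cmd in these scripts forwards to scripts/rpc.py). Issued directly against a running target on the default RPC socket, the same setup looks like:
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420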
00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1008189 ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1008189 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 1008189 ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 1008189 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1008189 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1008189' 00:36:52.036 killing process with pid 1008189 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 1008189 00:36:52.036 [2024-05-15 03:02:55.314045] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:52.036 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@971 -- # wait 1008189 00:36:52.295 [2024-05-15 03:02:55.403747] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:36:52.556 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:36:52.556 00:36:52.556 real 0m24.926s 00:36:52.556 user 1m3.034s 00:36:52.556 sys 0m6.371s 00:36:52.556 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:52.556 03:02:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:52.556 ************************************ 00:36:52.556 END TEST nvmf_bdevperf 00:36:52.556 ************************************ 00:36:52.556 03:02:55 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:52.556 03:02:55 nvmf_rdma -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:52.556 03:02:55 nvmf_rdma -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:52.556 03:02:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:52.556 ************************************ 00:36:52.556 START TEST nvmf_target_disconnect 00:36:52.556 ************************************ 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:36:52.556 * Looking for test storage... 
00:36:52.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:52.556 03:02:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.816 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:52.816 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:52.816 03:02:55 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:52.816 03:02:55 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:36:59.388 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:36:59.389 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:36:59.389 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:36:59.389 Found net devices under 0000:18:00.0: mlx_0_0 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:36:59.389 Found net devices under 0000:18:00.1: mlx_0_1 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:59.389 03:03:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:59.389 03:03:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:36:59.389 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:59.389 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:36:59.389 altname enp24s0f0np0 00:36:59.389 altname ens785f0np0 00:36:59.389 inet 192.168.100.8/24 scope global mlx_0_0 00:36:59.389 valid_lft forever preferred_lft forever 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:36:59.389 03:03:02 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:36:59.389 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:59.389 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:36:59.389 altname enp24s0f1np1 00:36:59.389 altname ens785f1np1 00:36:59.389 inet 192.168.100.9/24 scope global mlx_0_1 00:36:59.389 valid_lft forever preferred_lft forever 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:59.389 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:59.390 03:03:02 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:36:59.390 192.168.100.9' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:36:59.390 192.168.100.9' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:36:59.390 192.168.100.9' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:59.390 ************************************ 00:36:59.390 START TEST nvmf_target_disconnect_tc1 00:36:59.390 ************************************ 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:36:59.390 03:03:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:59.390 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.390 [2024-05-15 03:03:02.354826] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:59.390 [2024-05-15 03:03:02.354885] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:59.390 [2024-05-15 03:03:02.354908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:37:00.329 [2024-05-15 03:03:03.358706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:00.329 [2024-05-15 03:03:03.358739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
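For anyone replaying this stage locally: tc1 launches the SPDK `reconnect` example before any nvmf target has been started, so the RDMA CM connect is rejected (the RDMA_CM_EVENT_REJECTED / connect error -74 lines above) and the surrounding NOT/valid_exec_arg wrapper counts the non-zero exit as a pass. A minimal sketch of that check, assuming a local checkout where the binary sits under build/examples:

```bash
# Sketch only: tc1's "connect to a non-listening target must fail" check.
# The reconnect arguments mirror the trace above; the relative path is an assumption.
RECONNECT=./build/examples/reconnect
TRID='trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

if "$RECONNECT" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r "$TRID"; then
    echo "unexpected: connect succeeded although nothing listens on port 4420" >&2
    exit 1
fi
echo "got the expected connect failure (RDMA_CM_EVENT_REJECTED)"
```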
00:37:00.329 [2024-05-15 03:03:03.358756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:37:00.329 [2024-05-15 03:03:03.358797] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:00.329 [2024-05-15 03:03:03.358811] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:00.329 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:37:00.329 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:00.329 Initializing NVMe Controllers 00:37:00.329 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:37:00.329 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:37:00.329 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:37:00.329 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:37:00.329 00:37:00.329 real 0m1.180s 00:37:00.329 user 0m0.865s 00:37:00.329 sys 0m0.302s 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:00.330 ************************************ 00:37:00.330 END TEST nvmf_target_disconnect_tc1 00:37:00.330 ************************************ 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:00.330 ************************************ 00:37:00.330 START TEST nvmf_target_disconnect_tc2 00:37:00.330 ************************************ 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1012534 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1012534 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:00.330 
03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 1012534 ']' 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:00.330 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.330 [2024-05-15 03:03:03.524604] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:00.330 [2024-05-15 03:03:03.524670] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.330 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.590 [2024-05-15 03:03:03.633009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:00.590 [2024-05-15 03:03:03.684722] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.590 [2024-05-15 03:03:03.684775] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.590 [2024-05-15 03:03:03.684789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.590 [2024-05-15 03:03:03.684802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.590 [2024-05-15 03:03:03.684813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
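A note on the core masks in play here: `nvmf_tgt` was started with `-m 0xF0`, which is why the reactor lines that follow report cores 4-7, while the `reconnect` initiator runs with `-c 0xF` on cores 0-3, so host and target never share a core. A quick sketch for expanding such a mask:

```bash
# Sketch: list the CPU cores selected by an SPDK core mask (0xF0 -> 4 5 6 7, 0xF -> 0 1 2 3).
mask=0xF0
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && printf '%d ' "$core"
done
echo
```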
00:37:00.590 [2024-05-15 03:03:03.685392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:00.590 [2024-05-15 03:03:03.685480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:00.590 [2024-05-15 03:03:03.685581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:00.590 [2024-05-15 03:03:03.685580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.590 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.590 Malloc0 00:37:00.850 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:00.850 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.850 03:03:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.850 [2024-05-15 03:03:03.915096] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x574e70/0x581400) succeed. 00:37:00.850 [2024-05-15 03:03:03.930578] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5764b0/0x621490) succeed. 
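At this point the test has created the 64 MB, 512-byte-block `Malloc0` bdev and the rdma transport through its `rpc_cmd` helper, and the target has opened both mlx5 IB devices. The same state can be inspected by hand with the stock RPC client; a sketch, assuming the default /var/tmp/spdk.sock socket this run waits on:

```bash
# Sketch: inspect what the rpc_cmd calls above created (socket path taken from
# the "listen on UNIX domain socket /var/tmp/spdk.sock" message earlier in the log).
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_get_bdevs -b Malloc0     # the 64 MB malloc bdev that will back the namespace
$RPC nvmf_get_transports           # should list the rdma transport just created
```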
00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.850 [2024-05-15 03:03:04.110620] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:00.850 [2024-05-15 03:03:04.110968] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:00.850 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1012703 00:37:00.851 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:00.851 03:03:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:37:01.110 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.016 03:03:06 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1012534 00:37:03.016 03:03:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Read completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 Write completed with error (sct=0, sc=8) 00:37:04.396 starting I/O failed 00:37:04.396 [2024-05-15 03:03:07.355345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:04.965 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1012534 Killed "${NVMF_APP[@]}" "$@" 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1013508 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1013508 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 1013508 ']' 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:04.965 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:04.965 [2024-05-15 03:03:08.196681] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
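The block above is the disconnect that tc2 is actually testing: target pid 1012534 is killed with SIGKILL while the `reconnect` initiator still has I/O in flight (hence the burst of `sct=0, sc=8` completions and the CQ transport error -6), and a replacement target, pid 1013508, is started right away so the initiator can try to reconnect. Condensed, the step looks roughly like this:

```bash
# Sketch of tc2's disconnect step as traced above; $nvmfpid stands for the target
# pid the test captured (1012534 in this run).
kill -9 "$nvmfpid"   # in-flight I/O completes with (sct=0, sc=8); the initiator
                     # then reports "CQ transport error -6" on its qpairs
sleep 2              # host/target_disconnect.sh@47
# a fresh nvmf_tgt (-m 0xF0) is then started and the Malloc0 bdev, rdma transport,
# subsystem and 192.168.100.8:4420 listeners are re-created for it
```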
00:37:04.965 [2024-05-15 03:03:08.196763] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.965 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.226 [2024-05-15 03:03:08.311267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Read completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 Write completed with error (sct=0, sc=8) 00:37:05.226 starting I/O failed 00:37:05.226 [2024-05-15 03:03:08.360379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.226 [2024-05-15 03:03:08.361206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:05.226 [2024-05-15 03:03:08.361244] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.226 [2024-05-15 03:03:08.361258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.226 [2024-05-15 03:03:08.361271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.226 [2024-05-15 03:03:08.361282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:05.226 [2024-05-15 03:03:08.361408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:05.226 [2024-05-15 03:03:08.361509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:05.226 [2024-05-15 03:03:08.361626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:05.226 [2024-05-15 03:03:08.361626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:37:05.226 [2024-05-15 03:03:08.361964] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:05.226 [2024-05-15 03:03:08.361984] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:05.226 [2024-05-15 03:03:08.362001] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:05.226 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:05.226 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:37:05.226 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:05.226 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:05.226 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.486 Malloc0 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.486 [2024-05-15 03:03:08.590052] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b7de70/0x1b8a400) succeed. 
00:37:05.486 [2024-05-15 03:03:08.605493] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b7f4b0/0x1c2a490) succeed. 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.486 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.746 [2024-05-15 03:03:08.785204] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:05.746 [2024-05-15 03:03:08.785580] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:05.746 03:03:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1012703 00:37:06.315 [2024-05-15 03:03:09.365940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 
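Both target instances in this run (pid 1012534 and pid 1013508) were configured with the same sequence of `rpc_cmd` calls traced above; since `rpc_cmd` forwards its arguments to the RPC client, the bring-up is roughly equivalent to the following, with the socket path assumed to be the default /var/tmp/spdk.sock:

```bash
# Sketch: the target bring-up traced above expressed as plain scripts/rpc.py calls.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
```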
00:37:06.315 [2024-05-15 03:03:09.375091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.375161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.375180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.375191] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.375200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.385250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.395113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.395168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.395186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.395196] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.395206] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.405319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.415073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.415134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.415151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.415161] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.415170] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.425318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 
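The block above is the first reconnect attempt against the replacement target, and the entries that follow repeat the same pattern: the target rejects the I/O-qpair CONNECT with 'Unknown controller ID 0x1', the host sees the connect completion fail with sct 1 / sc 130, and each attempt ends in 'qpair failed and we were unable to recover it'. When digging through a saved copy of this console output, the repetition is easiest to take in as a tally; a sketch (the log file name is an assumption):

```bash
# Sketch: tally the reconnect attempts and their outcome from a saved copy of this
# console output (the file name nvmf-phy-autotest-console.log is an assumption).
grep -c 'Unknown controller ID 0x1'                     nvmf-phy-autotest-console.log
grep -c 'qpair failed and we were unable to recover it' nvmf-phy-autotest-console.log
```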
00:37:06.315 [2024-05-15 03:03:09.435183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.435243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.435259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.435269] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.435278] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.445472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.455203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.455265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.455282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.455292] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.455301] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.465606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.475237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.475290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.475306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.475316] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.475325] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.485504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 
00:37:06.315 [2024-05-15 03:03:09.495567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.495616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.495633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.495646] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.495656] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.505601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.515398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.515454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.515470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.515479] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.515489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.315 [2024-05-15 03:03:09.525584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.315 qpair failed and we were unable to recover it. 00:37:06.315 [2024-05-15 03:03:09.535432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.315 [2024-05-15 03:03:09.535489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.315 [2024-05-15 03:03:09.535506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.315 [2024-05-15 03:03:09.535515] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.315 [2024-05-15 03:03:09.535524] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.316 [2024-05-15 03:03:09.545800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.316 qpair failed and we were unable to recover it. 
00:37:06.316 [2024-05-15 03:03:09.555422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.316 [2024-05-15 03:03:09.555473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.316 [2024-05-15 03:03:09.555489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.316 [2024-05-15 03:03:09.555498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.316 [2024-05-15 03:03:09.555507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.316 [2024-05-15 03:03:09.565650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.316 qpair failed and we were unable to recover it. 00:37:06.316 [2024-05-15 03:03:09.575545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.316 [2024-05-15 03:03:09.575590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.316 [2024-05-15 03:03:09.575607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.316 [2024-05-15 03:03:09.575617] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.316 [2024-05-15 03:03:09.575626] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.316 [2024-05-15 03:03:09.585891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.316 qpair failed and we were unable to recover it. 00:37:06.316 [2024-05-15 03:03:09.595560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.316 [2024-05-15 03:03:09.595610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.316 [2024-05-15 03:03:09.595627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.316 [2024-05-15 03:03:09.595636] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.316 [2024-05-15 03:03:09.595645] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.605771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 
00:37:06.576 [2024-05-15 03:03:09.615621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.615679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.615695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.615705] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.615714] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.626072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.635679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.635734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.635750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.635760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.635770] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.646210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.655703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.655753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.655769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.655779] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.655787] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.666199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 
00:37:06.576 [2024-05-15 03:03:09.675791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.675842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.675861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.675870] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.675879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.686064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.696008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.696056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.696072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.696081] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.696090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.706263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.715876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.715936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.715953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.715962] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.715971] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.726291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 
00:37:06.576 [2024-05-15 03:03:09.736032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.736086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.736102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.736111] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.736120] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.746315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.755942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.755997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.756013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.576 [2024-05-15 03:03:09.756022] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.576 [2024-05-15 03:03:09.756034] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.576 [2024-05-15 03:03:09.766489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.576 qpair failed and we were unable to recover it. 00:37:06.576 [2024-05-15 03:03:09.776142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.576 [2024-05-15 03:03:09.776198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.576 [2024-05-15 03:03:09.776214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.577 [2024-05-15 03:03:09.776223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.577 [2024-05-15 03:03:09.776232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.577 [2024-05-15 03:03:09.786638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.577 qpair failed and we were unable to recover it. 
00:37:06.577 [2024-05-15 03:03:09.796202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.577 [2024-05-15 03:03:09.796259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.577 [2024-05-15 03:03:09.796274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.577 [2024-05-15 03:03:09.796284] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.577 [2024-05-15 03:03:09.796293] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.577 [2024-05-15 03:03:09.806611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.577 qpair failed and we were unable to recover it. 00:37:06.577 [2024-05-15 03:03:09.816341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.577 [2024-05-15 03:03:09.816392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.577 [2024-05-15 03:03:09.816408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.577 [2024-05-15 03:03:09.816418] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.577 [2024-05-15 03:03:09.816427] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.577 [2024-05-15 03:03:09.826542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.577 qpair failed and we were unable to recover it. 00:37:06.577 [2024-05-15 03:03:09.836202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.577 [2024-05-15 03:03:09.836261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.577 [2024-05-15 03:03:09.836278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.577 [2024-05-15 03:03:09.836287] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.577 [2024-05-15 03:03:09.836296] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.577 [2024-05-15 03:03:09.846410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.577 qpair failed and we were unable to recover it. 
00:37:06.577 [2024-05-15 03:03:09.856428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.577 [2024-05-15 03:03:09.856479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.577 [2024-05-15 03:03:09.856495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.577 [2024-05-15 03:03:09.856504] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.577 [2024-05-15 03:03:09.856513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.866889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:09.876395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.876443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.876459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.876468] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.876477] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.886804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:09.896664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.896710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.896726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.896735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.896744] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.906858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 
00:37:06.837 [2024-05-15 03:03:09.916475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.916526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.916542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.916551] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.916560] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.926979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:09.936652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.936709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.936725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.936738] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.936747] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.947023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:09.956597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.956641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.956657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.956666] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.956675] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.967008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 
00:37:06.837 [2024-05-15 03:03:09.976740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.976783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.976798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.976808] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.976817] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:09.986979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:09.996879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:09.996935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:09.996950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:09.996960] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:09.996969] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:10.007154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:10.016921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:10.016979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:10.016996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:10.017007] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:10.017016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:10.027266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 
00:37:06.837 [2024-05-15 03:03:10.036909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.837 [2024-05-15 03:03:10.036966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.837 [2024-05-15 03:03:10.036986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.837 [2024-05-15 03:03:10.036997] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.837 [2024-05-15 03:03:10.037006] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.837 [2024-05-15 03:03:10.047353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.837 qpair failed and we were unable to recover it. 00:37:06.837 [2024-05-15 03:03:10.056932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.838 [2024-05-15 03:03:10.056980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.838 [2024-05-15 03:03:10.056997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.838 [2024-05-15 03:03:10.057007] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.838 [2024-05-15 03:03:10.057016] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.838 [2024-05-15 03:03:10.067311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.838 qpair failed and we were unable to recover it. 00:37:06.838 [2024-05-15 03:03:10.076988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.838 [2024-05-15 03:03:10.077052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.838 [2024-05-15 03:03:10.077069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.838 [2024-05-15 03:03:10.077080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.838 [2024-05-15 03:03:10.077089] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.838 [2024-05-15 03:03:10.087297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.838 qpair failed and we were unable to recover it. 
00:37:06.838 [2024-05-15 03:03:10.097074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.838 [2024-05-15 03:03:10.097140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.838 [2024-05-15 03:03:10.097156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.838 [2024-05-15 03:03:10.097166] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.838 [2024-05-15 03:03:10.097175] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:06.838 [2024-05-15 03:03:10.107373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.838 qpair failed and we were unable to recover it. 00:37:06.838 [2024-05-15 03:03:10.117211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.838 [2024-05-15 03:03:10.117264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.838 [2024-05-15 03:03:10.117284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.838 [2024-05-15 03:03:10.117293] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.838 [2024-05-15 03:03:10.117302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.127562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.137193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.137238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.137254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.137263] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.137272] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.147499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 
00:37:07.098 [2024-05-15 03:03:10.157302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.157354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.157369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.157378] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.157387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.167735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.177291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.177340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.177356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.177366] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.177374] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.187723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.197339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.197395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.197411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.197420] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.197432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.208193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 
00:37:07.098 [2024-05-15 03:03:10.217360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.217404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.217419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.217429] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.217438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.227898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.237607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.237658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.237673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.237682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.237692] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.248038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.257528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.257583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.257598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.257607] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.257616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.267942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 
00:37:07.098 [2024-05-15 03:03:10.277683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.277735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.277751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.277760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.277769] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.288078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.297679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.297724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.297740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.297750] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.297759] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.307961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.317783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.317834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.317850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.317859] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.317868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.328208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 
00:37:07.098 [2024-05-15 03:03:10.337758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.337805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.337821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.337831] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.337839] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.348246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.357918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.357965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.357982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.357992] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.358000] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.098 [2024-05-15 03:03:10.368277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.098 qpair failed and we were unable to recover it. 00:37:07.098 [2024-05-15 03:03:10.377784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.098 [2024-05-15 03:03:10.377833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.098 [2024-05-15 03:03:10.377849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.098 [2024-05-15 03:03:10.377862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.098 [2024-05-15 03:03:10.377871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.388367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 
00:37:07.359 [2024-05-15 03:03:10.398041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.398089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.398105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.398115] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.398124] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.408511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.359 [2024-05-15 03:03:10.418056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.418108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.418124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.418133] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.418143] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.428501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.359 [2024-05-15 03:03:10.438109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.438170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.438185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.438195] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.438204] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.448588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 
00:37:07.359 [2024-05-15 03:03:10.458247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.458296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.458312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.458321] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.458330] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.468568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.359 [2024-05-15 03:03:10.478223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.478274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.478290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.478300] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.478309] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.488759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.359 [2024-05-15 03:03:10.498320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.498379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.498395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.498405] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.498414] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.508736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 
00:37:07.359 [2024-05-15 03:03:10.518331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.518376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.518391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.518401] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.518410] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.528719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.359 [2024-05-15 03:03:10.538510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.359 [2024-05-15 03:03:10.538560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.359 [2024-05-15 03:03:10.538576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.359 [2024-05-15 03:03:10.538585] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.359 [2024-05-15 03:03:10.538594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.359 [2024-05-15 03:03:10.548873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.359 qpair failed and we were unable to recover it. 00:37:07.360 [2024-05-15 03:03:10.558523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.360 [2024-05-15 03:03:10.558573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.360 [2024-05-15 03:03:10.558592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.360 [2024-05-15 03:03:10.558602] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.360 [2024-05-15 03:03:10.558611] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.360 [2024-05-15 03:03:10.568964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.360 qpair failed and we were unable to recover it. 
00:37:07.360 [2024-05-15 03:03:10.578498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.360 [2024-05-15 03:03:10.578556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.360 [2024-05-15 03:03:10.578571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.360 [2024-05-15 03:03:10.578581] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.360 [2024-05-15 03:03:10.578590] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.360 [2024-05-15 03:03:10.588788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.360 qpair failed and we were unable to recover it. 00:37:07.360 [2024-05-15 03:03:10.598527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.360 [2024-05-15 03:03:10.598582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.360 [2024-05-15 03:03:10.598598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.360 [2024-05-15 03:03:10.598608] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.360 [2024-05-15 03:03:10.598616] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.360 [2024-05-15 03:03:10.609017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.360 qpair failed and we were unable to recover it. 00:37:07.360 [2024-05-15 03:03:10.618806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.360 [2024-05-15 03:03:10.618853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.360 [2024-05-15 03:03:10.618870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.360 [2024-05-15 03:03:10.618879] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.360 [2024-05-15 03:03:10.618888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.360 [2024-05-15 03:03:10.628994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.360 qpair failed and we were unable to recover it. 
00:37:07.360 [2024-05-15 03:03:10.638651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.360 [2024-05-15 03:03:10.638701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.360 [2024-05-15 03:03:10.638717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.360 [2024-05-15 03:03:10.638726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.360 [2024-05-15 03:03:10.638738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.620 [2024-05-15 03:03:10.649021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.620 qpair failed and we were unable to recover it. 00:37:07.620 [2024-05-15 03:03:10.658733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.620 [2024-05-15 03:03:10.658783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.620 [2024-05-15 03:03:10.658798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.620 [2024-05-15 03:03:10.658807] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.620 [2024-05-15 03:03:10.658816] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.620 [2024-05-15 03:03:10.669127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.620 qpair failed and we were unable to recover it. 00:37:07.620 [2024-05-15 03:03:10.678804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.620 [2024-05-15 03:03:10.678861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.620 [2024-05-15 03:03:10.678876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.620 [2024-05-15 03:03:10.678886] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.620 [2024-05-15 03:03:10.678906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.620 [2024-05-15 03:03:10.689402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.620 qpair failed and we were unable to recover it. 
00:37:07.620 [2024-05-15 03:03:10.699066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.620 [2024-05-15 03:03:10.699116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.620 [2024-05-15 03:03:10.699131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.620 [2024-05-15 03:03:10.699141] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.620 [2024-05-15 03:03:10.699150] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.620 [2024-05-15 03:03:10.709397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.620 qpair failed and we were unable to recover it. 00:37:07.620 [2024-05-15 03:03:10.719008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.620 [2024-05-15 03:03:10.719059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.620 [2024-05-15 03:03:10.719074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.620 [2024-05-15 03:03:10.719084] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.620 [2024-05-15 03:03:10.719092] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.620 [2024-05-15 03:03:10.729144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.620 qpair failed and we were unable to recover it. 00:37:07.620 [2024-05-15 03:03:10.739107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.620 [2024-05-15 03:03:10.739168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.620 [2024-05-15 03:03:10.739183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.620 [2024-05-15 03:03:10.739193] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.620 [2024-05-15 03:03:10.739202] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.749523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 
00:37:07.621 [2024-05-15 03:03:10.759077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.759132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.759147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.759157] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.759165] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.769416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 00:37:07.621 [2024-05-15 03:03:10.779113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.779158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.779175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.779184] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.779193] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.789637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 00:37:07.621 [2024-05-15 03:03:10.799347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.799398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.799414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.799423] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.799432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.809602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 
00:37:07.621 [2024-05-15 03:03:10.819396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.819452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.819468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.819480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.819489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.829744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 00:37:07.621 [2024-05-15 03:03:10.839283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.839338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.839353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.839363] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.839372] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.849732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 00:37:07.621 [2024-05-15 03:03:10.859483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.859534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.859549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.859559] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.859568] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.869861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 
00:37:07.621 [2024-05-15 03:03:10.879472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.879534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.879550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.879559] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.879568] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.621 [2024-05-15 03:03:10.889903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.621 qpair failed and we were unable to recover it. 00:37:07.621 [2024-05-15 03:03:10.899684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.621 [2024-05-15 03:03:10.899740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.621 [2024-05-15 03:03:10.899756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.621 [2024-05-15 03:03:10.899765] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.621 [2024-05-15 03:03:10.899775] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:10.910648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 00:37:07.881 [2024-05-15 03:03:10.919577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.881 [2024-05-15 03:03:10.919635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.881 [2024-05-15 03:03:10.919652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.881 [2024-05-15 03:03:10.919663] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.881 [2024-05-15 03:03:10.919671] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:10.930118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 
00:37:07.881 [2024-05-15 03:03:10.939754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.881 [2024-05-15 03:03:10.939799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.881 [2024-05-15 03:03:10.939816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.881 [2024-05-15 03:03:10.939826] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.881 [2024-05-15 03:03:10.939835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:10.950172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 00:37:07.881 [2024-05-15 03:03:10.959787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.881 [2024-05-15 03:03:10.959837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.881 [2024-05-15 03:03:10.959852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.881 [2024-05-15 03:03:10.959862] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.881 [2024-05-15 03:03:10.959871] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:10.969989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 00:37:07.881 [2024-05-15 03:03:10.979791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.881 [2024-05-15 03:03:10.979843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.881 [2024-05-15 03:03:10.979859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.881 [2024-05-15 03:03:10.979869] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.881 [2024-05-15 03:03:10.979878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:10.989920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 
00:37:07.881 [2024-05-15 03:03:10.999916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.881 [2024-05-15 03:03:10.999969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.881 [2024-05-15 03:03:10.999989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.881 [2024-05-15 03:03:10.999999] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.881 [2024-05-15 03:03:11.000007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.881 [2024-05-15 03:03:11.010059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.881 qpair failed and we were unable to recover it. 00:37:07.881 [2024-05-15 03:03:11.019996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.020046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.020062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.020071] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.020080] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.030235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 00:37:07.882 [2024-05-15 03:03:11.039906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.039955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.039971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.039980] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.039989] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.050249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 
00:37:07.882 [2024-05-15 03:03:11.059981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.060039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.060055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.060064] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.060073] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.070375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 00:37:07.882 [2024-05-15 03:03:11.080081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.080135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.080151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.080160] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.080172] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.090481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 00:37:07.882 [2024-05-15 03:03:11.100053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.100103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.100119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.100128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.100137] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.110567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 
00:37:07.882 [2024-05-15 03:03:11.120239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.120291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.120307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.120316] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.120325] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.130504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 00:37:07.882 [2024-05-15 03:03:11.140269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.140330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.140345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.140355] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.140364] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:07.882 [2024-05-15 03:03:11.150642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.882 qpair failed and we were unable to recover it. 00:37:07.882 [2024-05-15 03:03:11.160304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.882 [2024-05-15 03:03:11.160356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.882 [2024-05-15 03:03:11.160371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.882 [2024-05-15 03:03:11.160381] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.882 [2024-05-15 03:03:11.160390] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.142 [2024-05-15 03:03:11.170696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.142 qpair failed and we were unable to recover it. 
00:37:08.142 [2024-05-15 03:03:11.180343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.142 [2024-05-15 03:03:11.180397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.142 [2024-05-15 03:03:11.180414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.142 [2024-05-15 03:03:11.180423] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.142 [2024-05-15 03:03:11.180431] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.142 [2024-05-15 03:03:11.190770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.142 qpair failed and we were unable to recover it. 00:37:08.142 [2024-05-15 03:03:11.200524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.142 [2024-05-15 03:03:11.200577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.142 [2024-05-15 03:03:11.200593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.142 [2024-05-15 03:03:11.200602] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.142 [2024-05-15 03:03:11.200611] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.142 [2024-05-15 03:03:11.210764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.142 qpair failed and we were unable to recover it. 00:37:08.142 [2024-05-15 03:03:11.220487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.142 [2024-05-15 03:03:11.220543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.142 [2024-05-15 03:03:11.220559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.142 [2024-05-15 03:03:11.220569] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.142 [2024-05-15 03:03:11.220577] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.142 [2024-05-15 03:03:11.230964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.142 qpair failed and we were unable to recover it. 
00:37:08.142 [2024-05-15 03:03:11.240470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.142 [2024-05-15 03:03:11.240523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.240539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.240549] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.240558] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.250974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.260539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.260590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.260606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.260619] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.260627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.270844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.280718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.280770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.280787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.280796] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.280805] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.290949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 
00:37:08.143 [2024-05-15 03:03:11.300621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.300670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.300685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.300694] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.300703] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.311042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.320757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.320802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.320818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.320827] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.320836] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.331249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.340805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.340849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.340865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.340874] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.340883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.351276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 
00:37:08.143 [2024-05-15 03:03:11.360885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.360942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.360958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.360967] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.360976] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.371318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.380921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.380977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.380993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.381002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.381011] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.391356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 00:37:08.143 [2024-05-15 03:03:11.400982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.401034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.401049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.401059] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.401068] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.143 [2024-05-15 03:03:11.411505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.143 qpair failed and we were unable to recover it. 
00:37:08.143 [2024-05-15 03:03:11.421056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.143 [2024-05-15 03:03:11.421106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.143 [2024-05-15 03:03:11.421122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.143 [2024-05-15 03:03:11.421131] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.143 [2024-05-15 03:03:11.421140] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.403 [2024-05-15 03:03:11.431651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.441123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.441173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.441192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.441201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.441210] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.451546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.461131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.461193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.461208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.461218] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.461227] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.471554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 
00:37:08.404 [2024-05-15 03:03:11.481193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.481247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.481263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.481273] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.481282] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.491687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.501355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.501405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.501420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.501430] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.501439] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.511559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.521251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.521304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.521319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.521328] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.521340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.531822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 
00:37:08.404 [2024-05-15 03:03:11.541446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.541494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.541510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.541519] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.541528] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.551953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.561419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.561472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.561489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.561499] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.561508] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.571812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.581531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.581582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.581599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.581608] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.581617] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.591763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 
00:37:08.404 [2024-05-15 03:03:11.601563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.601617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.601633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.601642] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.601650] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.611942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.621650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.621701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.621716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.621725] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.621734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.632005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.641651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.641701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.641716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.641725] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.641734] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.652046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 
00:37:08.404 [2024-05-15 03:03:11.661765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.661815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.661830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.661840] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.661849] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.672083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.404 [2024-05-15 03:03:11.681883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.404 [2024-05-15 03:03:11.681936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.404 [2024-05-15 03:03:11.681952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.404 [2024-05-15 03:03:11.681962] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.404 [2024-05-15 03:03:11.681971] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.404 [2024-05-15 03:03:11.692131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.404 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.701932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.701988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.702004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.702018] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.702027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.712314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 
00:37:08.665 [2024-05-15 03:03:11.721952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.722007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.722023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.722032] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.722041] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.732266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.742054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.742099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.742115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.742125] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.742134] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.752557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.762027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.762077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.762092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.762101] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.762110] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.772268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 
00:37:08.665 [2024-05-15 03:03:11.782165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.782221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.782238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.782247] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.782256] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.792653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.802166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.802221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.802238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.802248] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.802256] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.812672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.822256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.822306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.822322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.822331] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.822340] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.832766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 
00:37:08.665 [2024-05-15 03:03:11.842299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.842351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.842367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.842376] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.842386] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.852669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.862362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.862415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.862431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.862440] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.862449] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.872792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 00:37:08.665 [2024-05-15 03:03:11.882396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.882445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.665 [2024-05-15 03:03:11.882464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.665 [2024-05-15 03:03:11.882474] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.665 [2024-05-15 03:03:11.882483] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.665 [2024-05-15 03:03:11.892817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.665 qpair failed and we were unable to recover it. 
00:37:08.665 [2024-05-15 03:03:11.902541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.665 [2024-05-15 03:03:11.902587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.666 [2024-05-15 03:03:11.902603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.666 [2024-05-15 03:03:11.902612] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.666 [2024-05-15 03:03:11.902621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.666 [2024-05-15 03:03:11.912949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.666 qpair failed and we were unable to recover it. 00:37:08.666 [2024-05-15 03:03:11.922552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.666 [2024-05-15 03:03:11.922602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.666 [2024-05-15 03:03:11.922618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.666 [2024-05-15 03:03:11.922627] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.666 [2024-05-15 03:03:11.922636] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.666 [2024-05-15 03:03:11.933032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.666 qpair failed and we were unable to recover it. 00:37:08.666 [2024-05-15 03:03:11.942596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.666 [2024-05-15 03:03:11.942649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.666 [2024-05-15 03:03:11.942665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.666 [2024-05-15 03:03:11.942674] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.666 [2024-05-15 03:03:11.942683] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.666 [2024-05-15 03:03:11.953110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.666 qpair failed and we were unable to recover it. 
00:37:08.926 [2024-05-15 03:03:11.962728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:11.962779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:11.962795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:11.962804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:11.962817] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:11.973090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:11.982667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:11.982709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:11.982725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:11.982735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:11.982743] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:11.993269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:12.002800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.002849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.002865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.002874] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.002883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.013176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 
00:37:08.926 [2024-05-15 03:03:12.022809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.022868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.022884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.022893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.022906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.033358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:12.043056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.043101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.043117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.043127] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.043136] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.053265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:12.062865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.062919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.062935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.062945] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.062954] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.073404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 
00:37:08.926 [2024-05-15 03:03:12.083205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.083252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.083269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.083278] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.083287] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.093228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:12.103109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.103158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.103173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.103183] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.103192] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.113572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 00:37:08.926 [2024-05-15 03:03:12.123239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.123286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.123301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.123310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.123319] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.133687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.926 qpair failed and we were unable to recover it. 
00:37:08.926 [2024-05-15 03:03:12.143202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.926 [2024-05-15 03:03:12.143249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.926 [2024-05-15 03:03:12.143264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.926 [2024-05-15 03:03:12.143281] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.926 [2024-05-15 03:03:12.143290] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.926 [2024-05-15 03:03:12.153757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.927 qpair failed and we were unable to recover it. 00:37:08.927 [2024-05-15 03:03:12.163336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.927 [2024-05-15 03:03:12.163387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.927 [2024-05-15 03:03:12.163403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.927 [2024-05-15 03:03:12.163412] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.927 [2024-05-15 03:03:12.163421] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.927 [2024-05-15 03:03:12.173745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.927 qpair failed and we were unable to recover it. 00:37:08.927 [2024-05-15 03:03:12.183445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.927 [2024-05-15 03:03:12.183496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.927 [2024-05-15 03:03:12.183513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.927 [2024-05-15 03:03:12.183522] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.927 [2024-05-15 03:03:12.183531] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.927 [2024-05-15 03:03:12.193743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.927 qpair failed and we were unable to recover it. 
00:37:08.927 [2024-05-15 03:03:12.203433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.927 [2024-05-15 03:03:12.203480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.927 [2024-05-15 03:03:12.203495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.927 [2024-05-15 03:03:12.203505] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.927 [2024-05-15 03:03:12.203514] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:08.927 [2024-05-15 03:03:12.213903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.927 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.223572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.223623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.223639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.223649] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.223659] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.233890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.243603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.243656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.243672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.243681] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.243690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.254006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 
00:37:09.187 [2024-05-15 03:03:12.263611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.263668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.263683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.263693] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.263702] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.274074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.283764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.283810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.283826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.283835] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.283844] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.294000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.303764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.303816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.303832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.303841] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.303850] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.314202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 
00:37:09.187 [2024-05-15 03:03:12.323808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.323858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.323877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.323887] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.323899] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.334189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.343861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.343918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.343934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.343944] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.343953] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.354301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 00:37:09.187 [2024-05-15 03:03:12.363925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.363975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.363992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.364001] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.187 [2024-05-15 03:03:12.364009] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.187 [2024-05-15 03:03:12.374417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.187 qpair failed and we were unable to recover it. 
00:37:09.187 [2024-05-15 03:03:12.384079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.187 [2024-05-15 03:03:12.384135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.187 [2024-05-15 03:03:12.384151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.187 [2024-05-15 03:03:12.384160] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.188 [2024-05-15 03:03:12.384169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.188 [2024-05-15 03:03:12.394495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.188 qpair failed and we were unable to recover it. 00:37:09.188 [2024-05-15 03:03:12.404229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.188 [2024-05-15 03:03:12.404283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.188 [2024-05-15 03:03:12.404299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.188 [2024-05-15 03:03:12.404308] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.188 [2024-05-15 03:03:12.404320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.188 [2024-05-15 03:03:12.414376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.188 qpair failed and we were unable to recover it. 00:37:09.188 [2024-05-15 03:03:12.424161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.188 [2024-05-15 03:03:12.424220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.188 [2024-05-15 03:03:12.424236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.188 [2024-05-15 03:03:12.424246] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.188 [2024-05-15 03:03:12.424254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.188 [2024-05-15 03:03:12.434593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.188 qpair failed and we were unable to recover it. 
00:37:09.188 [2024-05-15 03:03:12.444306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.188 [2024-05-15 03:03:12.444353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.188 [2024-05-15 03:03:12.444369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.188 [2024-05-15 03:03:12.444378] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.188 [2024-05-15 03:03:12.444387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.188 [2024-05-15 03:03:12.454751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.188 qpair failed and we were unable to recover it. 00:37:09.188 [2024-05-15 03:03:12.464328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.188 [2024-05-15 03:03:12.464372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.188 [2024-05-15 03:03:12.464387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.188 [2024-05-15 03:03:12.464396] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.188 [2024-05-15 03:03:12.464405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.188 [2024-05-15 03:03:12.474787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.188 qpair failed and we were unable to recover it. 00:37:09.448 [2024-05-15 03:03:12.484398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.448 [2024-05-15 03:03:12.484451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.448 [2024-05-15 03:03:12.484468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.448 [2024-05-15 03:03:12.484478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.448 [2024-05-15 03:03:12.484487] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.448 [2024-05-15 03:03:12.494882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.448 qpair failed and we were unable to recover it. 
00:37:09.448 [2024-05-15 03:03:12.504516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.504567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.504583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.504592] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.504601] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.514845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.524477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.524528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.524544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.524553] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.524562] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.534798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.544644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.544695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.544711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.544720] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.544729] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.554864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 
00:37:09.449 [2024-05-15 03:03:12.564719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.564772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.564788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.564798] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.564807] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.574953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.585240] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.585302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.585319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.585333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.585342] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.595061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.604759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.604811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.604827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.604836] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.604845] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.615107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 
00:37:09.449 [2024-05-15 03:03:12.624847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.624893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.624914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.624923] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.624932] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.635078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.644870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.644926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.644942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.644952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.644961] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.655198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.664860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.664925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.664941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.664951] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.664961] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.675226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 
00:37:09.449 [2024-05-15 03:03:12.684949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.685005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.685021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.685030] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.685039] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.695462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.705090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.705134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.705150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.705159] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.705168] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.715378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 00:37:09.449 [2024-05-15 03:03:12.725060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.449 [2024-05-15 03:03:12.725111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.449 [2024-05-15 03:03:12.725127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.449 [2024-05-15 03:03:12.725136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.449 [2024-05-15 03:03:12.725145] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.449 [2024-05-15 03:03:12.735575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.449 qpair failed and we were unable to recover it. 
00:37:09.710 [2024-05-15 03:03:12.745259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.745313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.745328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.745338] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.745347] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.710 [2024-05-15 03:03:12.755625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.710 qpair failed and we were unable to recover it. 00:37:09.710 [2024-05-15 03:03:12.765238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.765285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.765306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.765315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.765324] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.710 [2024-05-15 03:03:12.775493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.710 qpair failed and we were unable to recover it. 00:37:09.710 [2024-05-15 03:03:12.785280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.785325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.785341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.785350] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.785359] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.710 [2024-05-15 03:03:12.795626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.710 qpair failed and we were unable to recover it. 
00:37:09.710 [2024-05-15 03:03:12.805431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.805481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.805496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.805506] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.805515] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.710 [2024-05-15 03:03:12.815616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.710 qpair failed and we were unable to recover it. 00:37:09.710 [2024-05-15 03:03:12.825461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.825515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.825531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.825540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.825549] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.710 [2024-05-15 03:03:12.835791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.710 qpair failed and we were unable to recover it. 00:37:09.710 [2024-05-15 03:03:12.845422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.710 [2024-05-15 03:03:12.845474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.710 [2024-05-15 03:03:12.845490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.710 [2024-05-15 03:03:12.845500] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.710 [2024-05-15 03:03:12.845512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.855931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 
00:37:09.711 [2024-05-15 03:03:12.865514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.865565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.865580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.865590] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.865598] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.875934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 00:37:09.711 [2024-05-15 03:03:12.885586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.885642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.885657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.885667] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.885676] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.896206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 00:37:09.711 [2024-05-15 03:03:12.905644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.905704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.905720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.905730] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.905739] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.916052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 
00:37:09.711 [2024-05-15 03:03:12.925640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.925683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.925699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.925708] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.925717] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.936121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 00:37:09.711 [2024-05-15 03:03:12.945788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.945839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.945855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.945865] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.945873] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.956224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 00:37:09.711 [2024-05-15 03:03:12.965794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.965846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.965863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.965872] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.965881] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.976340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 
00:37:09.711 [2024-05-15 03:03:12.985790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.711 [2024-05-15 03:03:12.985844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.711 [2024-05-15 03:03:12.985860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.711 [2024-05-15 03:03:12.985870] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.711 [2024-05-15 03:03:12.985879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.711 [2024-05-15 03:03:12.996326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.711 qpair failed and we were unable to recover it. 00:37:09.971 [2024-05-15 03:03:13.005934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.971 [2024-05-15 03:03:13.005986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.971 [2024-05-15 03:03:13.006003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.971 [2024-05-15 03:03:13.006012] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.006022] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.016355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.026045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.026090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.026105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.026118] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.026127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.036412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 
00:37:09.972 [2024-05-15 03:03:13.046028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.046079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.046095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.046104] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.046113] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.056427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.066110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.066166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.066182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.066191] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.066200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.076495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.086159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.086205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.086221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.086230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.086239] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.096649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 
00:37:09.972 [2024-05-15 03:03:13.106289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.106342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.106357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.106367] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.106375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.116756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.126337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.126388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.126403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.126413] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.126422] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.136568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.146415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.146469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.146484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.146493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.146502] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.156785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 
00:37:09.972 [2024-05-15 03:03:13.166412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.166469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.166485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.166494] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.166503] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.176884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.186482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.186531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.186547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.186557] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.186566] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.196915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.206535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.206585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.206603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.206613] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.206622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.216807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 
00:37:09.972 [2024-05-15 03:03:13.226616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.226672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.226687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.226697] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.226705] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.236990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:09.972 [2024-05-15 03:03:13.246571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:09.972 [2024-05-15 03:03:13.246626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:09.972 [2024-05-15 03:03:13.246642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:09.972 [2024-05-15 03:03:13.246651] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:09.972 [2024-05-15 03:03:13.246660] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:09.972 [2024-05-15 03:03:13.257190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:09.972 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.266658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.266705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.266722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.266731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.266740] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.277121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 
00:37:10.233 [2024-05-15 03:03:13.286697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.286750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.286766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.286776] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.286791] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.297236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.306820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.306877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.306892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.306907] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.306916] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.317078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.326828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.326878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.326905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.326915] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.326924] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.337415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 
00:37:10.233 [2024-05-15 03:03:13.346872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.346926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.346942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.346952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.346960] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.357379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.367007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.367066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.367081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.367091] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.367100] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.377485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.387035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.387089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.387105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.387115] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.387123] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.397568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 
00:37:10.233 [2024-05-15 03:03:13.407187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.407235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.407250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.407259] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.407268] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.417641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.427133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.427180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.427195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.427204] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.427214] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.233 [2024-05-15 03:03:13.437692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.233 qpair failed and we were unable to recover it. 00:37:10.233 [2024-05-15 03:03:13.447332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.233 [2024-05-15 03:03:13.447380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.233 [2024-05-15 03:03:13.447396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.233 [2024-05-15 03:03:13.447406] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.233 [2024-05-15 03:03:13.447415] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.234 [2024-05-15 03:03:13.457588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.234 qpair failed and we were unable to recover it. 
00:37:10.234 [2024-05-15 03:03:13.467427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.234 [2024-05-15 03:03:13.467480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.234 [2024-05-15 03:03:13.467496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.234 [2024-05-15 03:03:13.467508] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.234 [2024-05-15 03:03:13.467517] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.234 [2024-05-15 03:03:13.477687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.234 qpair failed and we were unable to recover it. 00:37:10.234 [2024-05-15 03:03:13.487318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.234 [2024-05-15 03:03:13.487367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.234 [2024-05-15 03:03:13.487384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.234 [2024-05-15 03:03:13.487393] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.234 [2024-05-15 03:03:13.487402] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.234 [2024-05-15 03:03:13.497934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.234 qpair failed and we were unable to recover it. 00:37:10.234 [2024-05-15 03:03:13.507318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.234 [2024-05-15 03:03:13.507371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.234 [2024-05-15 03:03:13.507386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.234 [2024-05-15 03:03:13.507396] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.234 [2024-05-15 03:03:13.507405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.234 [2024-05-15 03:03:13.517776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.234 qpair failed and we were unable to recover it. 
00:37:10.494 [2024-05-15 03:03:13.527529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.527579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.527595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.527605] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.527614] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.538029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.547558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.547612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.547628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.547637] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.547646] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.558022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.567605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.567658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.567673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.567682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.567691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.578038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 
00:37:10.495 [2024-05-15 03:03:13.587743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.587794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.587810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.587820] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.587828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.598212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.607803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.607859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.607875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.607885] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.607900] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.618113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.627873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.627933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.627949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.627958] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.627967] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.638126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 
00:37:10.495 [2024-05-15 03:03:13.647940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.647988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.648008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.648017] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.648027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.658374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.668065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.668117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.668133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.668143] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.668151] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.678448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.688093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.688144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.688160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.688169] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.688178] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.698600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 
00:37:10.495 [2024-05-15 03:03:13.708114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.708167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.708182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.708192] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.708201] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.718688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.728179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.728225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.728240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.728250] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.728261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.738615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.495 [2024-05-15 03:03:13.748247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.748291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.748306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.748315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.748324] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.758577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 
00:37:10.495 [2024-05-15 03:03:13.768416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.495 [2024-05-15 03:03:13.768465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.495 [2024-05-15 03:03:13.768481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.495 [2024-05-15 03:03:13.768490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.495 [2024-05-15 03:03:13.768498] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.495 [2024-05-15 03:03:13.778769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.495 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.788378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.788436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.788453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.788463] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.788472] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.798868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.808378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.808426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.808442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.808452] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.808460] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.818973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 
00:37:10.779 [2024-05-15 03:03:13.828462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.828515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.828531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.828540] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.828549] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.839060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.848602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.848657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.848672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.848681] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.848690] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.858835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.868571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.868622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.868637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.868647] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.868656] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.879162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 
00:37:10.779 [2024-05-15 03:03:13.888676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.888721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.888737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.888746] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.888755] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.899108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.908720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.908772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.908788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.908801] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.908810] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.919163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 00:37:10.779 [2024-05-15 03:03:13.928842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.779 [2024-05-15 03:03:13.928900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.779 [2024-05-15 03:03:13.928916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.779 [2024-05-15 03:03:13.928926] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.779 [2024-05-15 03:03:13.928934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.779 [2024-05-15 03:03:13.939221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.779 qpair failed and we were unable to recover it. 
00:37:10.779 [2024-05-15 03:03:13.948936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:13.948993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:13.949009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:13.949019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:13.949027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:13.959268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 00:37:10.780 [2024-05-15 03:03:13.968980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:13.969033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:13.969048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:13.969057] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:13.969066] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:13.979550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 00:37:10.780 [2024-05-15 03:03:13.988963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:13.989011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:13.989027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:13.989036] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:13.989045] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:13.999473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 
00:37:10.780 [2024-05-15 03:03:14.009087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:14.009143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:14.009159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:14.009168] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:14.009177] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:14.019441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 00:37:10.780 [2024-05-15 03:03:14.029124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:14.029174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:14.029189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:14.029198] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:14.029207] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:14.039643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 00:37:10.780 [2024-05-15 03:03:14.049158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:10.780 [2024-05-15 03:03:14.049209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:10.780 [2024-05-15 03:03:14.049225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:10.780 [2024-05-15 03:03:14.049234] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:10.780 [2024-05-15 03:03:14.049243] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:10.780 [2024-05-15 03:03:14.059496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:10.780 qpair failed and we were unable to recover it. 
00:37:11.045 [2024-05-15 03:03:14.069268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.069313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.069329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.069338] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.069347] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.079610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 00:37:11.045 [2024-05-15 03:03:14.089426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.089476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.089494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.089504] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.089512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.099680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 00:37:11.045 [2024-05-15 03:03:14.109494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.109550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.109565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.109575] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.109584] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.119750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 
00:37:11.045 [2024-05-15 03:03:14.129539] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.129586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.129602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.129611] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.129620] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.139789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 00:37:11.045 [2024-05-15 03:03:14.149593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.149646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.149662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.149671] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.149681] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.159719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 00:37:11.045 [2024-05-15 03:03:14.169630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.169681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.169697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.169706] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.045 [2024-05-15 03:03:14.169719] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.045 [2024-05-15 03:03:14.179950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.045 qpair failed and we were unable to recover it. 
00:37:11.045 [2024-05-15 03:03:14.189691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.045 [2024-05-15 03:03:14.189748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.045 [2024-05-15 03:03:14.189764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.045 [2024-05-15 03:03:14.189774] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.189783] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.200051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 00:37:11.046 [2024-05-15 03:03:14.209725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.209773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.209789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.209798] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.209807] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.220178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 00:37:11.046 [2024-05-15 03:03:14.229719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.229768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.229784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.229793] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.229802] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.239929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 
00:37:11.046 [2024-05-15 03:03:14.249837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.249886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.249905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.249914] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.249923] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.260228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 00:37:11.046 [2024-05-15 03:03:14.269854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.269917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.269933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.269942] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.269951] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.280163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 00:37:11.046 [2024-05-15 03:03:14.289970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.290023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.290039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.290048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.290057] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.300410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 
00:37:11.046 [2024-05-15 03:03:14.309946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.309991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.310007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.310016] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.310025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.046 [2024-05-15 03:03:14.320394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.046 qpair failed and we were unable to recover it. 00:37:11.046 [2024-05-15 03:03:14.330031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.046 [2024-05-15 03:03:14.330080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.046 [2024-05-15 03:03:14.330096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.046 [2024-05-15 03:03:14.330105] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.046 [2024-05-15 03:03:14.330114] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.305 [2024-05-15 03:03:14.340141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.305 qpair failed and we were unable to recover it. 00:37:11.305 [2024-05-15 03:03:14.350141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.305 [2024-05-15 03:03:14.350191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.305 [2024-05-15 03:03:14.350207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.305 [2024-05-15 03:03:14.350219] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.306 [2024-05-15 03:03:14.350228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.306 [2024-05-15 03:03:14.360581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.306 qpair failed and we were unable to recover it. 
00:37:11.306 [2024-05-15 03:03:14.370194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.306 [2024-05-15 03:03:14.370241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.306 [2024-05-15 03:03:14.370257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.306 [2024-05-15 03:03:14.370267] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.306 [2024-05-15 03:03:14.370275] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.306 [2024-05-15 03:03:14.380563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.306 qpair failed and we were unable to recover it. 00:37:11.306 [2024-05-15 03:03:14.390179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.306 [2024-05-15 03:03:14.390231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.306 [2024-05-15 03:03:14.390247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.306 [2024-05-15 03:03:14.390256] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.306 [2024-05-15 03:03:14.390265] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.306 [2024-05-15 03:03:14.400617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.306 qpair failed and we were unable to recover it. 00:37:11.306 [2024-05-15 03:03:14.410298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:11.306 [2024-05-15 03:03:14.410349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:11.306 [2024-05-15 03:03:14.410365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:11.306 [2024-05-15 03:03:14.410374] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:11.306 [2024-05-15 03:03:14.410383] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:37:11.306 [2024-05-15 03:03:14.420693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:11.306 qpair failed and we were unable to recover it. 00:37:11.306 [2024-05-15 03:03:14.420767] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:11.306 A controller has encountered a failure and is being reset. 
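The entries above repeat one failure cycle per injected disconnect in tc2: the target's ctrlr.c rejects the I/O-queue CONNECT with "Unknown controller ID 0x1" (the controller the host is trying to attach that queue to has evidently already been torn down), the host sees the Fabrics CONNECT complete with sct 1, sc 130, and abandons the qpair with "qpair failed and we were unable to recover it" before trying again. When triaging a run like this it is usually quicker to tally the cycles than to read them; a minimal sketch, assuming the console output has been saved to a file named build.log (a placeholder name, not something the test produces):
# count abandoned qpairs, then break the CONNECT failures down by status
grep -c 'qpair failed and we were unable to recover it' build.log
grep -o 'sct [0-9]*, sc [0-9]*' build.log | sort | uniq -c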
00:37:11.306 [2024-05-15 03:03:14.420888] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:11.306 [2024-05-15 03:03:14.422834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:11.306 Controller properly reset. 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Write completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 Read completed with error (sct=0, sc=8) 00:37:12.244 starting I/O failed 00:37:12.244 [2024-05-15 03:03:15.436348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 
Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Read completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 Write completed with error (sct=0, sc=8) 00:37:13.182 starting I/O failed 00:37:13.182 [2024-05-15 03:03:16.448540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:13.442 Initializing NVMe Controllers 00:37:13.442 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:13.442 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:13.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:13.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:13.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:13.442 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:13.442 Initialization complete. Launching workers. 
00:37:13.442 Starting thread on core 1 00:37:13.442 Starting thread on core 2 00:37:13.442 Starting thread on core 3 00:37:13.442 Starting thread on core 0 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:13.442 00:37:13.442 real 0m13.066s 00:37:13.442 user 0m24.241s 00:37:13.442 sys 0m3.806s 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.442 ************************************ 00:37:13.442 END TEST nvmf_target_disconnect_tc2 00:37:13.442 ************************************ 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:13.442 ************************************ 00:37:13.442 START TEST nvmf_target_disconnect_tc3 00:37:13.442 ************************************ 00:37:13.442 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc3 00:37:13.443 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1014603 00:37:13.443 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:37:13.443 03:03:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:37:13.443 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.351 03:03:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1013508 00:37:15.351 03:03:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:37:16.731 Read completed with error (sct=0, sc=8) 00:37:16.731 starting I/O failed 00:37:16.731 Read completed with error (sct=0, sc=8) 00:37:16.731 starting I/O failed 00:37:16.731 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 
Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Write completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 Read completed with error (sct=0, sc=8) 00:37:16.732 starting I/O failed 00:37:16.732 [2024-05-15 03:03:19.863710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:17.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1013508 Killed "${NVMF_APP[@]}" "$@" 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1015150 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1015150 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@828 -- # '[' -z 1015150 ']' 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 
00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:17.671 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:17.671 [2024-05-15 03:03:20.696646] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:17.671 [2024-05-15 03:03:20.696723] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.671 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.671 [2024-05-15 03:03:20.808111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:17.671 [2024-05-15 03:03:20.854855] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.671 [2024-05-15 03:03:20.854910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.671 [2024-05-15 03:03:20.854925] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.671 [2024-05-15 03:03:20.854938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.671 [2024-05-15 03:03:20.854948] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
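For reference, the host-side load generator that tc3 runs against the soon-to-be-killed target is buried in the trace a few lines up; restated on its own, with the paths and addresses exactly as this run used them, it is:
# host/target_disconnect.sh launches the reconnect example against 192.168.100.8;
# alt_traddr is the failover address the host is expected to fall back to
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
While that runs, the script kills the original target (pid 1013508) and starts the replacement nvmf_tgt with -i 0 -e 0xFFFF -m 0xF0, which is the DPDK and reactor start-up logged here.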
00:37:17.671 [2024-05-15 03:03:20.855072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:17.671 [2024-05-15 03:03:20.855172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:17.671 [2024-05-15 03:03:20.855274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:17.671 [2024-05-15 03:03:20.855273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Write completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 Read completed with error (sct=0, sc=8) 00:37:17.671 starting I/O failed 00:37:17.671 [2024-05-15 03:03:20.868824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:17.930 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:17.930 03:03:20 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@861 -- # return 0 00:37:17.930 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:17.930 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:17.930 03:03:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:17.930 Malloc0 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:17.930 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:17.930 [2024-05-15 03:03:21.082485] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6efe70/0x6fc400) succeed. 00:37:17.930 [2024-05-15 03:03:21.098121] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f14b0/0x79c490) succeed. 
00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 [2024-05-15 03:03:21.279760] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:18.190 [2024-05-15 03:03:21.280196] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.190 03:03:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1014603 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read 
completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Read completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 Write completed with error (sct=0, sc=8) 00:37:18.756 starting I/O failed 00:37:18.756 [2024-05-15 03:03:21.873946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.756 [2024-05-15 03:03:21.875546] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:18.756 [2024-05-15 03:03:21.875572] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:18.756 [2024-05-15 03:03:21.875593] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:19.693 [2024-05-15 03:03:22.879311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.693 qpair failed and we were unable to recover it. 
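The rpc_cmd calls traced a few lines back are what rebuild the target state behind the failover address while the host keeps failing. rpc_cmd is a thin wrapper around scripts/rpc.py, so an equivalent stand-alone sequence would look roughly like the sketch below (an illustration of this run's parameters, assuming a target whose RPC socket is the default /var/tmp/spdk.sock, not the literal test script):
# back the subsystem with a 64 MB, 512-byte-block malloc bdev
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# the RDMA transport creation is what produces the "Create IB device mlx5_*" notices above
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# tc3 only exposes the failover address; nothing listens on 192.168.100.8 any more
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420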
00:37:19.693 [2024-05-15 03:03:22.880744] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:19.693 [2024-05-15 03:03:22.880770] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:19.693 [2024-05-15 03:03:22.880782] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:20.629 [2024-05-15 03:03:23.884599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.629 qpair failed and we were unable to recover it. 00:37:20.629 [2024-05-15 03:03:23.886006] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:20.629 [2024-05-15 03:03:23.886031] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:20.629 [2024-05-15 03:03:23.886043] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:22.005 [2024-05-15 03:03:24.889800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.005 qpair failed and we were unable to recover it. 00:37:22.005 [2024-05-15 03:03:24.891135] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:22.005 [2024-05-15 03:03:24.891158] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:22.005 [2024-05-15 03:03:24.891171] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:22.941 [2024-05-15 03:03:25.895174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.941 qpair failed and we were unable to recover it. 00:37:22.941 [2024-05-15 03:03:25.896656] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:22.941 [2024-05-15 03:03:25.896679] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:22.941 [2024-05-15 03:03:25.896691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:23.878 [2024-05-15 03:03:26.900518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.878 qpair failed and we were unable to recover it. 00:37:23.878 [2024-05-15 03:03:26.901987] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:23.878 [2024-05-15 03:03:26.902012] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:23.878 [2024-05-15 03:03:26.902025] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:24.816 [2024-05-15 03:03:27.905816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:24.816 qpair failed and we were unable to recover it. 
00:37:24.816 [2024-05-15 03:03:27.907320] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:24.816 [2024-05-15 03:03:27.907344] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:24.816 [2024-05-15 03:03:27.907357] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:37:25.752 [2024-05-15 03:03:28.911164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:25.752 qpair failed and we were unable to recover it. 00:37:26.690 Read completed with error (sct=0, sc=8) 00:37:26.690 starting I/O failed 00:37:26.690 Read completed with error (sct=0, sc=8) 00:37:26.690 starting I/O failed 00:37:26.690 Read completed with error (sct=0, sc=8) 00:37:26.690 starting I/O failed 00:37:26.690 Write completed with error (sct=0, sc=8) 00:37:26.690 starting I/O failed 00:37:26.690 Write completed with error (sct=0, sc=8) 00:37:26.690 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Read completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 Write completed with error (sct=0, sc=8) 00:37:26.691 starting I/O failed 00:37:26.691 [2024-05-15 03:03:29.916267] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:26.691 [2024-05-15 03:03:29.917892] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:26.691 [2024-05-15 03:03:29.917921] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:26.691 [2024-05-15 03:03:29.917934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:37:28.070 [2024-05-15 03:03:30.921776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:28.070 qpair failed and we were unable to recover it. 00:37:28.070 [2024-05-15 03:03:30.923353] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:37:28.070 [2024-05-15 03:03:30.923382] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:37:28.070 [2024-05-15 03:03:30.923394] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:37:28.639 [2024-05-15 03:03:31.927205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:28.639 qpair failed and we were unable to recover it. 00:37:28.639 [2024-05-15 03:03:31.927355] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:28.639 A controller has encountered a failure and is being reset. 00:37:28.639 Resorting to new failover address 192.168.100.9 00:37:28.639 [2024-05-15 03:03:31.927457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:28.639 [2024-05-15 03:03:31.927537] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:37:28.898 [2024-05-15 03:03:31.960555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:37:28.898 Controller properly reset. 00:37:28.898 Initializing NVMe Controllers 00:37:28.898 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.898 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:28.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:28.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:28.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:28.898 Initialization complete. Launching workers. 
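At this point the tc3 failure injection has run its course: the host kept retrying the CONNECT and collecting RDMA_CM_EVENT_REJECTED and "RDMA connect error -74" pairs roughly once a second, until the keep-alive submission failed, the host logged "Resorting to new failover address 192.168.100.9", reset the controller, and re-initialized successfully. When reproducing this by hand, one way to confirm the target side is actually exposing the failover address, outside the test scripts, would be a query such as the one below (jq is used only for readability and is not part of the test):
# list each subsystem's listeners; expect an RDMA entry for 192.168.100.9:4420
./scripts/rpc.py nvmf_get_subsystems | jq '.[].listen_addresses'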
00:37:28.898 Starting thread on core 1 00:37:28.898 Starting thread on core 2 00:37:28.898 Starting thread on core 3 00:37:28.898 Starting thread on core 0 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:37:28.898 00:37:28.898 real 0m15.426s 00:37:28.898 user 0m59.708s 00:37:28.898 sys 0m5.298s 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:28.898 ************************************ 00:37:28.898 END TEST nvmf_target_disconnect_tc3 00:37:28.898 ************************************ 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:28.898 rmmod nvme_rdma 00:37:28.898 rmmod nvme_fabrics 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1015150 ']' 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1015150 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 1015150 ']' 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 1015150 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:28.898 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1015150 00:37:29.171 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:37:29.171 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:37:29.171 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1015150' 00:37:29.171 killing process with pid 1015150 00:37:29.171 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 1015150 00:37:29.171 [2024-05-15 03:03:32.204005] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:37:29.171 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 1015150 00:37:29.171 [2024-05-15 03:03:32.311003] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:37:29.430 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:29.430 03:03:32 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:29.430 00:37:29.430 real 0m36.826s 00:37:29.430 user 2m12.726s 00:37:29.430 sys 0m14.808s 00:37:29.431 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:29.431 03:03:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:29.431 ************************************ 00:37:29.431 END TEST nvmf_target_disconnect 00:37:29.431 ************************************ 00:37:29.431 03:03:32 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:37:29.431 03:03:32 nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:29.431 03:03:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:29.431 03:03:32 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:29.431 00:37:29.431 real 30m3.756s 00:37:29.431 user 86m46.805s 00:37:29.431 sys 6m38.309s 00:37:29.431 03:03:32 nvmf_rdma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:29.431 03:03:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:29.431 ************************************ 00:37:29.431 END TEST nvmf_rdma 00:37:29.431 ************************************ 00:37:29.431 03:03:32 -- spdk/autotest.sh@281 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:29.431 03:03:32 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:37:29.431 03:03:32 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:29.431 03:03:32 -- common/autotest_common.sh@10 -- # set +x 00:37:29.431 ************************************ 00:37:29.431 START TEST spdkcli_nvmf_rdma 00:37:29.431 ************************************ 00:37:29.431 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:37:29.690 * Looking for test storage... 
00:37:29.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00e1c02b-5999-e811-99d6-a4bf01488b4e 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00e1c02b-5999-e811-99d6-a4bf01488b4e 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.690 03:03:32 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1016843 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1016843 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@828 -- # '[' -z 1016843 ']' 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:29.691 03:03:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:29.691 [2024-05-15 03:03:32.864792] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:29.691 [2024-05-15 03:03:32.864868] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016843 ] 00:37:29.691 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.691 [2024-05-15 03:03:32.973577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:29.950 [2024-05-15 03:03:33.024704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.950 [2024-05-15 03:03:33.024709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@861 -- # return 0 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:37:29.950 03:03:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:36.590 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:37:36.591 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:37:36.591 Found 0000:18:00.1 (0x15b3 - 0x1015) 
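The discovery above reports the two Mellanox functions (0000:18:00.0 and 0000:18:00.1); the trace that follows resolves each PCI address to its netdev through sysfs before assigning the test IPs. A minimal standalone sketch of that lookup, using only the PCI addresses reported in this run (everything else is illustrative, not the test script itself):
    for pci in 0000:18:00.0 0000:18:00.1; do
        # each PCI network function lists its netdev name(s) under .../net/
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net device under $pci: $(basename "$dev")"
        done
    done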
00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:37:36.591 Found net devices under 0000:18:00.0: mlx_0_0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:37:36.591 Found net devices under 0000:18:00.1: mlx_0_1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:37:36.591 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:36.591 link/ether 50:6b:4b:bd:38:b2 brd ff:ff:ff:ff:ff:ff 00:37:36.591 altname enp24s0f0np0 00:37:36.591 altname ens785f0np0 00:37:36.591 inet 192.168.100.8/24 scope global mlx_0_0 00:37:36.591 valid_lft forever preferred_lft forever 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:36.591 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:36.591 link/ether 50:6b:4b:bd:38:b3 brd ff:ff:ff:ff:ff:ff 00:37:36.591 altname enp24s0f1np1 00:37:36.591 altname ens785f1np1 00:37:36.591 inet 192.168.100.9/24 scope global mlx_0_1 00:37:36.591 valid_lft forever preferred_lft forever 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:36.591 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@112 
-- # interface=mlx_0_1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:36.592 192.168.100.9' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:36.592 192.168.100.9' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:36.592 192.168.100.9' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:36.592 03:03:39 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:36.852 03:03:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:37:36.852 03:03:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:36.852 03:03:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:36.852 03:03:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:36.852 03:03:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:36.852 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:36.852 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:36.852 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:36.852 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:36.852 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:36.852 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:36.852 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:36.852 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 
IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:36.852 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:36.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:36.852 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:36.852 ' 00:37:39.390 [2024-05-15 03:03:42.658961] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbac790/0xa340c0) succeed. 00:37:39.390 [2024-05-15 03:03:42.675136] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbadc90/0xb1f180) succeed. 
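The batch above is fed to spdkcli_job.py in one invocation; the same configuration can be driven by hand with scripts/spdkcli.py, one command per call. A condensed sketch under the assumptions that it is run from the spdk repo root and that nvmf_tgt is already listening on the default RPC socket (commands taken from the batch above, trimmed to one bdev and one subsystem):
    # create a malloc bdev, the RDMA transport, a subsystem, a namespace and a listener
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4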
00:37:40.768 [2024-05-15 03:03:44.036879] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:40.768 [2024-05-15 03:03:44.037230] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:37:43.304 [2024-05-15 03:03:46.425314] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:37:45.210 [2024-05-15 03:03:48.488506] rdma.c:3032:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:47.243 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:47.243 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:47.243 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:47.243 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:47.243 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:47.243 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:47.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:47.243 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:37:47.243 03:03:50 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:47.501 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:47.502 03:03:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:47.502 03:03:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:47.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:47.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:47.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:47.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:37:47.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:37:47.502 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:47.502 '\''/nvmf/subsystem delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:47.502 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:47.502 ' 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:37:52.774 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:37:52.774 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:52.774 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:52.774 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1016843 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@947 -- # '[' -z 1016843 ']' 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # kill -0 1016843 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # uname 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1016843 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1016843' 00:37:52.774 killing process with pid 1016843 00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # kill 1016843 00:37:52.774 [2024-05-15 03:03:55.862490] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
00:37:52.774 03:03:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@971 -- # wait 1016843 00:37:52.774 [2024-05-15 03:03:55.930436] rdma.c:2885:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:53.033 rmmod nvme_rdma 00:37:53.033 rmmod nvme_fabrics 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:53.033 00:37:53.033 real 0m23.473s 00:37:53.033 user 0m50.812s 00:37:53.033 sys 0m6.285s 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:53.033 03:03:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:53.033 ************************************ 00:37:53.033 END TEST spdkcli_nvmf_rdma 00:37:53.033 ************************************ 00:37:53.033 03:03:56 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:53.033 03:03:56 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:37:53.033 03:03:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:53.033 03:03:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:53.033 03:03:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:53.033 03:03:56 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:37:53.033 03:03:56 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:37:53.033 03:03:56 -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:53.033 03:03:56 -- common/autotest_common.sh@10 -- # set +x 00:37:53.033 03:03:56 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:37:53.033 03:03:56 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:37:53.033 03:03:56 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:37:53.033 03:03:56 -- common/autotest_common.sh@10 -- # set +x 00:37:57.229 INFO: APP EXITING 00:37:57.229 INFO: killing all VMs 00:37:57.229 INFO: killing vhost app 00:37:57.229 
INFO: EXIT DONE 00:38:00.527 Waiting for block devices as requested 00:38:00.527 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:38:00.527 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:00.527 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:00.527 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:00.527 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:00.527 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:00.527 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:00.785 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:00.785 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:00.785 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:01.045 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:01.045 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:01.045 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:01.304 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:01.304 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:01.304 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:01.563 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:04.852 Cleaning 00:38:04.852 Removing: /var/run/dpdk/spdk0/config 00:38:04.852 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:04.852 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:04.852 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:04.852 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:04.852 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:04.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:04.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:04.853 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:04.853 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:04.853 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:04.853 Removing: /var/run/dpdk/spdk1/config 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:04.853 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:04.853 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:04.853 Removing: /var/run/dpdk/spdk1/mp_socket 00:38:04.853 Removing: /var/run/dpdk/spdk2/config 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:04.853 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:04.853 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:04.853 Removing: /var/run/dpdk/spdk3/config 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:04.853 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:04.853 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:04.853 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:04.853 Removing: /var/run/dpdk/spdk4/config 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:04.853 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:04.853 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:04.853 Removing: /dev/shm/bdevperf_trace.pid792385 00:38:04.853 Removing: /dev/shm/bdevperf_trace.pid933485 00:38:04.853 Removing: /dev/shm/bdev_svc_trace.1 00:38:04.853 Removing: /dev/shm/nvmf_trace.0 00:38:04.853 Removing: /dev/shm/spdk_tgt_trace.pid658890 00:38:04.853 Removing: /var/run/dpdk/spdk0 00:38:04.853 Removing: /var/run/dpdk/spdk1 00:38:04.853 Removing: /var/run/dpdk/spdk2 00:38:04.853 Removing: /var/run/dpdk/spdk3 00:38:04.853 Removing: /var/run/dpdk/spdk4 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1007119 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1007316 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1012325 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1012703 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1014603 00:38:04.853 Removing: /var/run/dpdk/spdk_pid1016843 00:38:04.853 Removing: /var/run/dpdk/spdk_pid658384 00:38:04.853 Removing: /var/run/dpdk/spdk_pid658890 00:38:04.853 Removing: /var/run/dpdk/spdk_pid659308 00:38:04.853 Removing: /var/run/dpdk/spdk_pid660059 00:38:05.112 Removing: /var/run/dpdk/spdk_pid660204 00:38:05.112 Removing: /var/run/dpdk/spdk_pid660988 00:38:05.112 Removing: /var/run/dpdk/spdk_pid661161 00:38:05.112 Removing: /var/run/dpdk/spdk_pid661372 00:38:05.112 Removing: /var/run/dpdk/spdk_pid665144 00:38:05.112 Removing: /var/run/dpdk/spdk_pid665648 00:38:05.112 Removing: /var/run/dpdk/spdk_pid665879 00:38:05.112 Removing: /var/run/dpdk/spdk_pid666114 00:38:05.112 Removing: /var/run/dpdk/spdk_pid666365 00:38:05.112 Removing: /var/run/dpdk/spdk_pid666611 00:38:05.112 Removing: /var/run/dpdk/spdk_pid666818 00:38:05.112 Removing: /var/run/dpdk/spdk_pid667018 00:38:05.112 Removing: /var/run/dpdk/spdk_pid667251 00:38:05.112 Removing: /var/run/dpdk/spdk_pid667950 00:38:05.112 Removing: /var/run/dpdk/spdk_pid670437 00:38:05.112 Removing: /var/run/dpdk/spdk_pid670647 00:38:05.112 Removing: /var/run/dpdk/spdk_pid670867 00:38:05.112 Removing: /var/run/dpdk/spdk_pid671034 00:38:05.112 Removing: /var/run/dpdk/spdk_pid671442 00:38:05.112 Removing: /var/run/dpdk/spdk_pid671525 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672017 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672030 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672348 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672418 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672628 00:38:05.112 Removing: /var/run/dpdk/spdk_pid672646 00:38:05.112 Removing: /var/run/dpdk/spdk_pid673100 
00:38:05.112 Removing: /var/run/dpdk/spdk_pid673303 00:38:05.112 Removing: /var/run/dpdk/spdk_pid673550 00:38:05.112 Removing: /var/run/dpdk/spdk_pid673767 00:38:05.112 Removing: /var/run/dpdk/spdk_pid673788 00:38:05.112 Removing: /var/run/dpdk/spdk_pid673955 00:38:05.112 Removing: /var/run/dpdk/spdk_pid674203 00:38:05.112 Removing: /var/run/dpdk/spdk_pid674429 00:38:05.112 Removing: /var/run/dpdk/spdk_pid674629 00:38:05.112 Removing: /var/run/dpdk/spdk_pid674927 00:38:05.112 Removing: /var/run/dpdk/spdk_pid675159 00:38:05.112 Removing: /var/run/dpdk/spdk_pid675485 00:38:05.112 Removing: /var/run/dpdk/spdk_pid675928 00:38:05.112 Removing: /var/run/dpdk/spdk_pid676148 00:38:05.112 Removing: /var/run/dpdk/spdk_pid676347 00:38:05.112 Removing: /var/run/dpdk/spdk_pid676546 00:38:05.112 Removing: /var/run/dpdk/spdk_pid676752 00:38:05.112 Removing: /var/run/dpdk/spdk_pid676959 00:38:05.112 Removing: /var/run/dpdk/spdk_pid677212 00:38:05.112 Removing: /var/run/dpdk/spdk_pid677451 00:38:05.112 Removing: /var/run/dpdk/spdk_pid677701 00:38:05.112 Removing: /var/run/dpdk/spdk_pid677932 00:38:05.112 Removing: /var/run/dpdk/spdk_pid678138 00:38:05.112 Removing: /var/run/dpdk/spdk_pid678342 00:38:05.112 Removing: /var/run/dpdk/spdk_pid678550 00:38:05.112 Removing: /var/run/dpdk/spdk_pid678751 00:38:05.112 Removing: /var/run/dpdk/spdk_pid678909 00:38:05.112 Removing: /var/run/dpdk/spdk_pid679074 00:38:05.112 Removing: /var/run/dpdk/spdk_pid682566 00:38:05.112 Removing: /var/run/dpdk/spdk_pid759201 00:38:05.112 Removing: /var/run/dpdk/spdk_pid762756 00:38:05.112 Removing: /var/run/dpdk/spdk_pid771396 00:38:05.112 Removing: /var/run/dpdk/spdk_pid776337 00:38:05.112 Removing: /var/run/dpdk/spdk_pid779427 00:38:05.112 Removing: /var/run/dpdk/spdk_pid779989 00:38:05.112 Removing: /var/run/dpdk/spdk_pid792385 00:38:05.112 Removing: /var/run/dpdk/spdk_pid792582 00:38:05.372 Removing: /var/run/dpdk/spdk_pid796228 00:38:05.372 Removing: /var/run/dpdk/spdk_pid801165 00:38:05.372 Removing: /var/run/dpdk/spdk_pid803408 00:38:05.372 Removing: /var/run/dpdk/spdk_pid811941 00:38:05.372 Removing: /var/run/dpdk/spdk_pid832977 00:38:05.372 Removing: /var/run/dpdk/spdk_pid836235 00:38:05.372 Removing: /var/run/dpdk/spdk_pid866482 00:38:05.372 Removing: /var/run/dpdk/spdk_pid870427 00:38:05.372 Removing: /var/run/dpdk/spdk_pid897395 00:38:05.372 Removing: /var/run/dpdk/spdk_pid910034 00:38:05.372 Removing: /var/run/dpdk/spdk_pid931889 00:38:05.372 Removing: /var/run/dpdk/spdk_pid932595 00:38:05.372 Removing: /var/run/dpdk/spdk_pid933485 00:38:05.372 Removing: /var/run/dpdk/spdk_pid937054 00:38:05.372 Removing: /var/run/dpdk/spdk_pid943576 00:38:05.372 Removing: /var/run/dpdk/spdk_pid944302 00:38:05.372 Removing: /var/run/dpdk/spdk_pid945023 00:38:05.372 Removing: /var/run/dpdk/spdk_pid945752 00:38:05.372 Removing: /var/run/dpdk/spdk_pid946025 00:38:05.372 Removing: /var/run/dpdk/spdk_pid949888 00:38:05.372 Removing: /var/run/dpdk/spdk_pid949935 00:38:05.372 Removing: /var/run/dpdk/spdk_pid953422 00:38:05.372 Removing: /var/run/dpdk/spdk_pid953795 00:38:05.372 Removing: /var/run/dpdk/spdk_pid954331 00:38:05.372 Removing: /var/run/dpdk/spdk_pid954871 00:38:05.372 Removing: /var/run/dpdk/spdk_pid954882 00:38:05.372 Removing: /var/run/dpdk/spdk_pid956160 00:38:05.372 Removing: /var/run/dpdk/spdk_pid957604 00:38:05.372 Removing: /var/run/dpdk/spdk_pid959041 00:38:05.372 Removing: /var/run/dpdk/spdk_pid960498 00:38:05.372 Removing: /var/run/dpdk/spdk_pid961941 00:38:05.372 Removing: /var/run/dpdk/spdk_pid963383 00:38:05.372 
Removing: /var/run/dpdk/spdk_pid968360 00:38:05.372 Removing: /var/run/dpdk/spdk_pid968769 00:38:05.372 Removing: /var/run/dpdk/spdk_pid969414 00:38:05.372 Removing: /var/run/dpdk/spdk_pid970056 00:38:05.372 Removing: /var/run/dpdk/spdk_pid974427 00:38:05.372 Removing: /var/run/dpdk/spdk_pid976975 00:38:05.372 Removing: /var/run/dpdk/spdk_pid981599 00:38:05.372 Removing: /var/run/dpdk/spdk_pid990408 00:38:05.372 Removing: /var/run/dpdk/spdk_pid990418 00:38:05.372 Clean 00:38:05.631 03:04:08 -- common/autotest_common.sh@1448 -- # return 0 00:38:05.631 03:04:08 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:38:05.631 03:04:08 -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:05.631 03:04:08 -- common/autotest_common.sh@10 -- # set +x 00:38:05.631 03:04:08 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:38:05.631 03:04:08 -- common/autotest_common.sh@727 -- # xtrace_disable 00:38:05.631 03:04:08 -- common/autotest_common.sh@10 -- # set +x 00:38:05.631 03:04:08 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:38:05.631 03:04:08 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:38:05.631 03:04:08 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:38:05.631 03:04:08 -- spdk/autotest.sh@387 -- # hash lcov 00:38:05.631 03:04:08 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:05.631 03:04:08 -- spdk/autotest.sh@389 -- # hostname 00:38:05.631 03:04:08 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-31 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:38:05.891 geninfo: WARNING: invalid characters removed from testname! 
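The coverage post-processing that follows merges the baseline and post-test captures and strips out-of-tree sources. A condensed sketch of that lcov flow (file names shortened to the basenames used in this run; the final genhtml step is illustrative and not part of this job):
    # combine the baseline and post-test captures, then remove paths we do not own
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info
    genhtml cov_total.info -o coverage_html   # optional HTML report, assumed extra step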
00:38:38.035 03:04:35 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:38.035 03:04:39 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:38.972 03:04:42 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:41.507 03:04:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:44.798 03:04:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:46.720 03:04:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:38:50.013 03:04:52 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:50.013 03:04:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:50.013 03:04:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:50.013 03:04:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.013 03:04:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.013 03:04:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.013 03:04:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.013 03:04:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.013 03:04:52 -- paths/export.sh@5 -- $ export PATH 00:38:50.013 03:04:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.013 03:04:52 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:38:50.013 03:04:52 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:50.013 03:04:52 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715735092.XXXXXX 00:38:50.013 03:04:52 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715735092.iDDoqm 00:38:50.013 03:04:52 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:50.013 03:04:52 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:38:50.013 03:04:52 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:38:50.013 03:04:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:38:50.013 03:04:52 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:50.013 03:04:52 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:50.013 03:04:52 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:50.013 03:04:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:50.013 03:04:52 -- common/autotest_common.sh@10 -- $ set +x 00:38:50.013 03:04:52 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:38:50.013 03:04:52 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:50.013 03:04:52 -- pm/common@17 -- $ local monitor 00:38:50.013 03:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.013 03:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.013 03:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.013 03:04:52 -- pm/common@21 -- $ date +%s 
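The lcov steps earlier in this section first merge the pre-test baseline with the post-test capture, then repeatedly prune paths that should not count toward coverage (DPDK sources, system headers, example and helper apps). A condensed sketch of that sequence, with OUT standing in for the output directory and the branch/function --rc switches omitted for brevity (both are assumptions for readability, not the exact autotest wrapper):

  # OUT is a placeholder for .../spdk/../output (assumption).
  OUT=/path/to/output
  # Merge the baseline and per-test captures into one totals file.
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # Remove external and helper code from the totals, one filter pattern at a time,
  # mirroring the -r passes in the log above.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done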
00:38:50.013 03:04:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.013 03:04:52 -- pm/common@21 -- $ date +%s 00:38:50.014 03:04:52 -- pm/common@25 -- $ sleep 1 00:38:50.014 03:04:52 -- pm/common@21 -- $ date +%s 00:38:50.014 03:04:52 -- pm/common@21 -- $ date +%s 00:38:50.014 03:04:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715735092 00:38:50.014 03:04:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715735092 00:38:50.014 03:04:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715735092 00:38:50.014 03:04:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715735092 00:38:50.014 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715735092_collect-vmstat.pm.log 00:38:50.014 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715735092_collect-cpu-load.pm.log 00:38:50.014 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715735092_collect-cpu-temp.pm.log 00:38:50.014 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715735092_collect-bmc-pm.bmc.pm.log 00:38:50.582 03:04:53 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:50.582 03:04:53 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:38:50.582 03:04:53 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:50.582 03:04:53 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:50.582 03:04:53 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:50.583 03:04:53 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:50.583 03:04:53 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:50.583 03:04:53 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:50.583 03:04:53 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:50.583 03:04:53 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:38:50.583 03:04:53 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:50.583 03:04:53 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:50.583 03:04:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:50.583 03:04:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:50.583 03:04:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.583 03:04:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:50.583 03:04:53 -- pm/common@44 -- $ pid=1031105 00:38:50.583 03:04:53 -- pm/common@50 -- $ kill -TERM 1031105 00:38:50.583 03:04:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.583 03:04:53 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:50.583 03:04:53 -- pm/common@44 -- $ pid=1031107 00:38:50.583 03:04:53 -- pm/common@50 -- $ kill -TERM 1031107 00:38:50.583 03:04:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.583 03:04:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:50.583 03:04:53 -- pm/common@44 -- $ pid=1031109 00:38:50.583 03:04:53 -- pm/common@50 -- $ kill -TERM 1031109 00:38:50.583 03:04:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:50.583 03:04:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:50.583 03:04:53 -- pm/common@44 -- $ pid=1031137 00:38:50.583 03:04:53 -- pm/common@50 -- $ sudo -E kill -TERM 1031137 00:38:50.583 + [[ -n 541552 ]] 00:38:50.583 + sudo kill 541552 00:38:50.594 [Pipeline] } 00:38:50.613 [Pipeline] // stage 00:38:50.618 [Pipeline] } 00:38:50.632 [Pipeline] // timeout 00:38:50.637 [Pipeline] } 00:38:50.651 [Pipeline] // catchError 00:38:50.658 [Pipeline] } 00:38:50.673 [Pipeline] // wrap 00:38:50.678 [Pipeline] } 00:38:50.689 [Pipeline] // catchError 00:38:50.694 [Pipeline] stage 00:38:50.696 [Pipeline] { (Epilogue) 00:38:50.706 [Pipeline] catchError 00:38:50.707 [Pipeline] { 00:38:50.716 [Pipeline] echo 00:38:50.717 Cleanup processes 00:38:50.720 [Pipeline] sh 00:38:51.003 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:51.003 1031211 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:38:51.003 1031432 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:51.019 [Pipeline] sh 00:38:51.306 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:38:51.306 ++ grep -v 'sudo pgrep' 00:38:51.306 ++ awk '{print $1}' 00:38:51.307 + sudo kill -9 1031211 00:38:51.319 [Pipeline] sh 00:38:51.603 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:03.859 [Pipeline] sh 00:39:04.145 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:04.145 Artifacts sizes are good 00:39:04.160 [Pipeline] archiveArtifacts 00:39:04.167 Archiving artifacts 00:39:04.388 [Pipeline] sh 00:39:04.671 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:39:04.686 [Pipeline] cleanWs 00:39:04.696 [WS-CLEANUP] Deleting project workspace... 00:39:04.696 [WS-CLEANUP] Deferred wipeout is used... 00:39:04.704 [WS-CLEANUP] done 00:39:04.706 [Pipeline] } 00:39:04.726 [Pipeline] // catchError 00:39:04.738 [Pipeline] sh 00:39:05.020 + logger -p user.info -t JENKINS-CI 00:39:05.030 [Pipeline] } 00:39:05.047 [Pipeline] // stage 00:39:05.053 [Pipeline] } 00:39:05.070 [Pipeline] // node 00:39:05.075 [Pipeline] End of Pipeline 00:39:05.116 Finished: SUCCESS
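For reference, the stop_monitor_resources step near the end of the run stops each resource monitor by reading its pidfile and sending SIGTERM, which is what the kill -TERM lines above correspond to. A minimal sketch of that pattern; POWER_DIR and the loop names are assumptions for illustration, not the exact pm/common implementation:

  # POWER_DIR is a placeholder for .../output/power, where the collectors wrote their pidfiles (assumption).
  POWER_DIR=/path/to/output/power
  for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile="$POWER_DIR/$monitor.pid"
      [[ -e "$pidfile" ]] || continue          # monitor never started, nothing to stop
      pid=$(cat "$pidfile")
      kill -TERM "$pid" 2>/dev/null || true    # the BMC collector is killed via sudo in the real job
  done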